May 31, 2010


An ‘action at a distance’ is a much contested concept in the history of physics. Aristotelian physics holds that every motion requires a conjoined mover: action can therefore never occur at a distance, but needs a medium enveloping the body, parts of which befit its motion and push it from behind (antiperistasis). The ‘corpuscularian’ philosophy that Boyle expounded in his Sceptical Chymist (1661) and The Origin of Forms and Qualities (1666) held that all material substances are composed of minute corpuscles, themselves possessing shape, size, and motion. The different properties of materials arise from different combinations and collisions of corpuscles: chemical properties, such as solubility, would be explicable by the mechanical interactions of corpuscles, just as the capacity of a key to turn a lock is explained by their respective shapes. In Boyle’s hands the idea is opposed to the Aristotelian theory of elements and principles, which he regarded as untestable and sterile. His approach is a precursor of modern chemical atomism and had immense influence on Locke. Locke, however, recognized the need for a different kind of force to guarantee the cohesion of atoms, and both this cohesion and the interaction between such atoms were criticized by Leibniz. Although natural motions like free fall and magnetic attraction (quaintly called ‘coition’) were recognized in the post-Aristotelian period, the rise of the corpuscularian philosophy again banned ‘attraction’, or unmediated action at a distance: the classic argument is that ‘matter cannot act where it is not’.


Cartesian physical theory also postulated ‘subtle matter’ to fill space and provide the medium for force and motion. Its successor, the aether, was postulated in order to provide a medium for transmitting forces and causal influences between objects that are not in direct contact. Even Newton, whose treatment of gravity might seem to leave it conceived as action at a distance, supposed that an intermediary must be postulated, although he could make no hypothesis as to its nature. Locke, having originally said that bodies act on each other ‘manifestly by impulse and nothing else’, later changed his mind and struck out the words ‘and nothing else’, although impulse remains ‘the only way that we can conceive bodies to function in’. In the Metaphysical Foundations of Natural Science Kant clearly sets out the view that the way in which bodies impel each other is no more natural, or intelligible, than the way in which they act at a distance; in particular he repeats the point half-understood by Locke, that any conception of solid, massy atoms requires understanding the force that makes them cohere as a single unity, which cannot itself be understood in terms of elastic collisions. In many cases contemporary field theories admit of alternative equivalent formulations, one with action at a distance, one with local action only.

Albert Einstein produced two theories whose consequences are still unfolding: the special theory of relativity (1905) and the general theory of relativity (1915).

The special theory gives a unified account of the laws of mechanics and of electromagnetism, including optics. Before 1905 the purely relative nature of uniform motion had in part been recognized in mechanics, although Newton had considered time to be absolute and had postulated absolute space. In electromagnetism the ether was supposed to provide an absolute basis with respect to which motion could be determined. The Galilean transformation equations represent the set of equations:

xʹ = x − vt

yʹ = y

zʹ = z

tʹ = t

They are used for transforming the parameters of position and motion from an observer at the point O with co-ordinates (x, y, z) to an observer at Oʹ with co-ordinates (xʹ, yʹ, zʹ). The x-axis is chosen to pass through O and Oʹ. The times of an event, t and tʹ, in the frames of reference of observers at O and Oʹ coincide. Here v is the relative velocity of separation of O and Oʹ. These equations conform to Newtonian mechanics. The Lorentz transformation equations, by contrast, are a set of equations for transforming the position and motion parameters from an observer at a point O(x, y, z) to an observer at Oʹ(xʹ, yʹ, zʹ) moving relative to one another. They replace the Galilean transformation equations of Newtonian mechanics in relativity problems. If the x-axes are chosen to pass through O and Oʹ, and the times of an event are t and tʹ in the frames of reference of the observers at O and Oʹ respectively, where the zeros of their time scales are the instants at which O and Oʹ coincided, the equations are:

xʹ = β(x − vt)

yʹ = y

zʹ = z

tʹ = β(t − vx/c²)

where v is the relative velocity of separation of O and Oʹ, c is the speed of light, and β is the function:

β = (1 − v²/c²)⁻½.
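The reduction of the Lorentz equations to the Galilean ones at low speed can be checked numerically. The following is a minimal sketch in Python (the event co-ordinates and speeds are invented for illustration, not taken from the text):

    # Compare Galilean and Lorentz transformations of an event (x, t)
    # seen by an observer O' moving at speed v along the common x-axis.
    c = 299_792_458.0  # speed of light, m/s

    def galilean(x, t, v):
        return x - v * t, t

    def lorentz(x, t, v):
        beta = 1.0 / (1.0 - v**2 / c**2) ** 0.5  # the factor beta defined above
        return beta * (x - v * t), beta * (t - v * x / c**2)

    x, t = 1_000.0, 1.0e-5            # an arbitrary event
    for v in (30.0, 0.9 * c):         # everyday speed vs. relativistic speed
        print(v, galilean(x, t, v), lorentz(x, t, v))

At v = 30 m/s the two transformations agree to within rounding; at v = 0.9c they differ markedly.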

Newton’s laws of motion: in his Principia (1687) Newton stated the three fundamental laws of motion, which are the basis of Newtonian mechanics. The first law states that every body perseveres in its state of rest, or of uniform motion in a straight line, except in so far as it is compelled to change that state by forces impressed on it. This may be regarded as a definition of force. The second law states that the rate of change of linear momentum is proportional to the force applied, and takes place in the straight line in which that force acts. This definition can be regarded as formulating a suitable way by which forces may be measured, that is, by the acceleration they produce:

F = d(mv)/dt

i.e., F = ma + v(dm/dt),

where F = force, m = mass, v = velocity, t = time, and a = acceleration. In the majority of cases the problem is non-relativistic and dm/dt = 0, i.e., the mass remains constant, and then:

F = ma.
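As a small illustration of the distinction between the general and restricted forms of the second law, the sketch below (Python; the numbers are invented) evaluates F = ma + v(dm/dt), which reduces to F = ma when dm/dt = 0:

    # F = d(mv)/dt = m*a + v*(dm/dt); constant mass gives the familiar F = m*a.
    def force(m, a, v=0.0, dm_dt=0.0):
        return m * a + v * dm_dt

    print(force(m=2.0, a=3.0))                         # constant mass: 6.0 N
    print(force(m=2.0, a=3.0, v=100.0, dm_dt=-0.01))   # mass being shed: 5.0 N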

The third law states that forces are caused by the interaction of pairs of bodies. The force exerted by A upon B and the force exerted by B upon A are simultaneous, equal in magnitude, opposite in direction, and in the same straight line, being caused by the same mechanism.

The popular statement of this law in terms of ‘action and reaction’ leads to much misunderstanding. In particular, any two forces that happen to be equal and opposite are often mistakenly supposed to exemplify the law even if they act on the same body; one force, arbitrarily called the ‘reaction’, is supposed to be a consequence of the other and to happen subsequently, and the two forces are supposed to oppose each other, causing equilibrium. Moreover, certain forces, such as those exerted by supports or propellants, are conventionally called ‘reactions’, causing considerable confusion.

The third law may be illustrated by the following examples. The gravitational force exerted by a body on the earth is equal and opposite to the gravitational force exerted by the earth on the body. The intermolecular repulsive force exerted on the ground by a body resting on it, or hitting it, is equal and opposite to the intermolecular repulsive force exerted on the body by the ground. A more general system of mechanics was given by Einstein in his theory of relativity; this reduces to Newtonian mechanics when all velocities relative to the observer are small compared with that of light.

Einstein rejected the concept of absolute space and time, and made two postulates: (i) the laws of nature are the same for all observers in uniform relative motion, and (ii) the speed of light is the same for all such observers, independently of the relative motions of sources and detectors. He showed that these postulates were equivalent to the requirement that the co-ordinates of space and time used by different observers should be related by the Lorentz transformation equations. The theory has several important consequences.

The transformation of time implies that two events that are simultaneous according to one observer will not necessarily be so according to another in uniform relative motion. This does not affect the order of causally related events, and so does not violate causality. It will appear to each of two observers in uniform relative motion that the other’s clock runs slowly. This is the phenomenon of ‘time dilation’; for example, an observer moving with respect to a radioactive source finds a longer decay time than is found by an observer at rest with respect to it, according to:

Tv = T0/(1 − v²/c²)^½

where Tv is the mean life measured by an observer at relative speed v, T0 is the mean life measured by an observer at rest with respect to the source, and c is the speed of light.
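A hedged numerical illustration (Python): taking the textbook value of about 2.2 microseconds for the mean life of a muon at rest, an observer who sees the muon travel at 0.99c measures a mean life roughly seven times longer:

    # Time dilation: T_v = T_0 / (1 - v^2/c^2)^(1/2)
    c = 299_792_458.0
    def dilated(T0, v):
        return T0 / (1.0 - v**2 / c**2) ** 0.5

    T0 = 2.2e-6                     # muon mean life at rest, s (textbook value)
    print(dilated(T0, 0.99 * c))    # about 1.6e-5 s, i.e. roughly 7 times T0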

This formula has been verified in innumerable experiments. One consequence is that no body can be accelerated from a speed below c with respect to any observer to one above c, since this would require infinite energy. Einstein deduced that the transfer of energy δE by any process entails the transfer of mass δm, where δE = δm c², so he concluded that the total energy E of any system of mass m would be given by:

E = mc²

The principle of conservation of mass states that the mass of any closed system is constant. Although conservation of mass was verified in many experiments, the evidence for it was limited. In contrast, the great success of theories assuming the conservation of energy established that principle, and Einstein assumed it as an axiom in his theory of relativity. According to this theory the transfer of energy E by any process entails the transfer of mass m = E/c². Therefore, the conservation of energy ensures the conservation of mass.

In Einstein’s theory, mass and energy are equivalent. This leads to alternative statements of the principle, in which terminology is not generally consistent. The law of equivalence of mass and energy holds that mass m and energy E are related by the equation E = mc², where c is the speed of light in a vacuum. Thus, a quantity of energy E has a mass m, and a mass m has intrinsic energy E. The kinetic energy of a particle as determined by an observer with relative speed v is thus (m − m₀)c², where m₀ is the rest mass; this tends to the classical value ½m₀v² if v ≪ c.
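The classical limit can be checked numerically. The sketch below (Python, with an arbitrary rest mass) compares the relativistic kinetic energy (m − m₀)c² with the classical ½m₀v² at increasing speeds:

    # Relativistic kinetic energy (m - m0)*c^2, with m = m0/(1 - v^2/c^2)^(1/2),
    # tends to the classical value (1/2)*m0*v^2 when v << c.
    c = 299_792_458.0
    def kinetic(m0, v):
        gamma = 1.0 / (1.0 - v**2 / c**2) ** 0.5
        return (gamma - 1.0) * m0 * c**2

    m0 = 1.0  # arbitrary rest mass, kg
    for v in (3.0e3, 3.0e7, 0.5 * c):
        print(v, kinetic(m0, v), 0.5 * m0 * v**2)  # near-equal until v nears c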

Attempts to express quantum theory in terms consistent with the requirements of relativity were begun by Sommerfeld (1915); eventually Dirac (1928) gave a relativistic formulation of the wave mechanics of conserved particles (fermions). This explained the concept of spin and the associated magnetic moment, which had been postulated to account for certain details of spectra. The theory led to results very important for the theory of elementary particles. The Klein-Gordon equation is the relativistic wave equation for bosons. It is applicable to bosons of zero spin, such as the pion. For example, the Klein-Gordon Lagrangian describes a single spin-0 scalar field φ:

L = ½[(∂φ/∂t)² − (∂φ/∂x)² − (∂φ/∂y)² − (∂φ/∂z)²] − ½(2πmc/h)²φ²

Then:

∂L/∂(∂μφ) = ∂μφ

and:

∂L/∂φ = −(2πmc/h)²φ,

and therefore the Lagrange equation requires that:

∂μ∂μφ + (2πmc/h)²φ = 0,

which is the Klein-Gordon equation describing the evolution in space and time of the field φ. Individual excitations of the normal modes of φ are particles of spin-0 and mass m.
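As a consistency check on the equation, substituting a plane wave φ = exp[i(kx − ωt)] gives the dispersion relation (ω/c)² = k² + (2πmc/h)², which is E² = p²c² + m²c⁴ with E = hω/2π and p = hk/2π. The sketch below (Python; the mass is chosen near the pion’s and the wavenumber is arbitrary, both purely for illustration) verifies this numerically:

    import math

    c = 299_792_458.0            # speed of light, m/s
    h = 6.626_070_15e-34         # Planck constant, J s
    m = 2.5e-28                  # roughly a pion mass, kg (illustrative)
    k = 1.0e15                   # arbitrary wavenumber, 1/m

    omega = c * math.sqrt(k**2 + (2.0 * math.pi * m * c / h) ** 2)
    hbar = h / (2.0 * math.pi)
    E, p = hbar * omega, hbar * k
    print(E**2, (p * c) ** 2 + (m * c**2) ** 2)   # equal up to rounding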

A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four co-ordinates: three spatial co-ordinates and one time co-ordinate. These co-ordinates define a four-dimensional space, and the motion of a particle can be described by a curve in this space, which is called ‘Minkowski space-time’. In certain formulations of the theory, use is made of a four-dimensional co-ordinate system in which three dimensions represent the spatial co-ordinates x, y, z and the fourth dimension is ict, where t is time, c is the speed of light and i is √−1; points in this space are called events. The equivalent of the distance between two points is the interval (δs) between two events, given by a Pythagorean law in space-time as:

(δs)2 = ij ηij δ χI χj

Where: χ = χ1, y = χ2, z = χ3 . . . , t = χ4 and η11 (χ) η33 (χ) = 1? η44 (χ)=1.

Here the ηᵢⱼ are the components of the Minkowski metric tensor. Distances between two points are not invariant under the Lorentz transformation, because measurements of the positions of the points that are simultaneous according to one observer will not be simultaneous according to another in uniform motion with respect to the first. By contrast, the interval between two events is invariant.
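The invariance of the interval, and the non-invariance of the spatial distance, can be verified directly. A minimal sketch (Python; the event separation and boost speed are invented for illustration, and y, z are suppressed):

    # (δs)^2 = c^2 δt^2 - δx^2 is unchanged by a boost along x; δx alone is not.
    c = 299_792_458.0

    def boost(x, t, v):
        b = 1.0 / (1.0 - v**2 / c**2) ** 0.5
        return b * (x - v * t), b * (t - v * x / c**2)

    (x1, t1), (x2, t2) = (0.0, 0.0), (5.0, 1.0e-7)   # two arbitrary events
    v = 0.6 * c
    (x1p, t1p), (x2p, t2p) = boost(x1, t1, v), boost(x2, t2, v)
    print((c * (t2 - t1)) ** 2 - (x2 - x1) ** 2)      # interval before boost
    print((c * (t2p - t1p)) ** 2 - (x2p - x1p) ** 2)  # same value after boost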

The equivalent of a vector in this four-dimensional space is called a ‘four-vector’; it has three space components and one time component. For example, the four-vector momentum has a time component proportional to the energy of a particle; the four-vector potential has as its space components the magnetic vector potential, while its time component corresponds to the electric potential.

The special theory of relativity is concerned with relative motion between non-accelerated frames of reference. The general theory deals with general relative motion between accelerated frames of reference. In accelerated systems of reference, certain fictitious forces are observed, such as the centrifugal and Coriolis forces found in rotating systems. These are known as fictitious forces because they disappear when the observer transforms to a non-accelerated system. For example, to an observer in a car rounding a bend at constant velocity, objects in the car appear to suffer a force acting outward. To an observer outside the car, this is simply their tendency to continue moving in a straight line. The inertia of the objects is seen to cause a fictitious force, and the observer can distinguish between non-inertial (accelerated) and inertial (non-accelerated) frames of reference.

A further point is that, to the observer in the car, all the objects are given the same acceleration irrespective of their mass. This implies a connection between the fictitious forces arising from accelerated systems and forces due to gravity, where the acceleration produced is independent of the mass. Near the surface of the earth the acceleration of free fall, g, is measured with respect to a nearby point on the surface. Because of the axial rotation the reference point is accelerated toward the centre of the circle of its latitude, so g is not quite equal in magnitude or direction to the acceleration toward the centre of the earth given by the theory of gravitation. In 1687 Newton presented his law of universal gravitation, according to which every particle attracts every other particle with a force F given by:

F = Gm₁m₂/x²,

where m₁ and m₂ are the masses of two particles a distance x apart, and G is the gravitational constant, which, according to modern measurements, has a value:

6.672 59 × 10⁻¹¹ m³ kg⁻¹ s⁻².

For extended bodies the forces are found by integration. Newton showed that the external effect of a spherically symmetric body is the same as if the whole mass were concentrated at the centre. Astronomical bodies are roughly spherically symmetrical, so they can be treated as point particles to a very good approximation. On this assumption Newton showed that his law was consistent with Kepler’s laws. Until recently, all experiments had confirmed the accuracy of the inverse square law and the independence of the law of the nature of the substances involved, but in the past few years evidence has been found against both.
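As a worked example of the law (Python; the masses and separation below are rounded textbook values for the earth and the moon, used only for illustration), both bodies are treated as point particles at their centres, as the spherical-symmetry result permits:

    # F = G * m1 * m2 / x^2
    G = 6.672_59e-11                      # m^3 kg^-1 s^-2, as quoted above
    m_earth, m_moon = 5.97e24, 7.35e22    # kg (rounded values)
    x = 3.84e8                            # mean earth-moon distance, m (rounded)
    print(G * m_earth * m_moon / x**2)    # roughly 2e20 N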

The strength of a gravitational field at any point is given by the force exerted on unit mass at that point. The field intensity at a distance x from a point mass m is therefore Gm/x², and acts toward m. Gravitational field strength is measured in newtons per kilogram. The gravitational potential V at a point is the work done in moving unit mass from infinity to the point against the field. Importantly: (a) the potential at a point distance x from the centre of a hollow homogeneous spherical shell of mass m, outside the shell, is:

V = −Gm/x

The potential is the same as if the mass of the shell were concentrated at the centre. (b) At any point inside the spherical shell the potential is equal to its value at the surface:

V = −Gm/r

where r is the radius of the shell. Thus there is no resultant force acting at any point inside the shell, since no potential difference exists between any two points. (c) The potential at a point distance x from the centre of a homogeneous solid sphere, outside the sphere, is the same as that for a shell:

V = −Gm/x

(d) At a point inside the sphere of radius r:

V = −Gm(3r² − x²)/2r³
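The four cases (a)-(d) can be gathered into piecewise functions. The following sketch (Python; the earth’s rounded mass and radius stand in for the homogeneous bodies, purely for illustration) also shows the potential is continuous at the surface, as the formulas require:

    G = 6.672_59e-11  # gravitational constant, m^3 kg^-1 s^-2

    def potential_shell(m, r, x):
        # (a) outside: -G*m/x; (b) inside: the constant surface value -G*m/r.
        return -G * m / x if x >= r else -G * m / r

    def potential_solid_sphere(m, r, x):
        # (c) outside: -G*m/x; (d) inside: -G*m*(3r^2 - x^2)/(2r^3).
        if x >= r:
            return -G * m / x
        return -G * m * (3.0 * r**2 - x**2) / (2.0 * r**3)

    m, r = 5.97e24, 6.37e6  # rounded mass (kg) and radius (m) of the earth
    for x in (0.5 * r, r, 2.0 * r):
        print(x, potential_solid_sphere(m, r, x))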

The essential property of gravitation is that it causes a change in motion, in particular the acceleration of free fall (g) in the earth’s gravitational field. According to the general theory of relativity, gravitational fields change the geometry of space and time, causing it to become curved. It is this curvature of space and time, produced by the presence of matter, that controls the natural motions of bodies. General relativity may thus be considered as a theory of gravitation, differences between it and Newtonian gravitation appearing only when the gravitational fields become very strong, as with ‘black holes’ and ‘neutron stars’, or when very accurate measurements can be made.

There is an equivalence between accelerated systems and forces due to gravity, where the acceleration produced is independent of the mass. For example, a person in a sealed container could not easily determine whether he was being pressed toward the floor by gravity or whether the container was in space, being accelerated upward by a rocket. Observations extended in space and time could distinguish between these alternatives, but otherwise they are indistinguishable. This leads to the ‘principle of equivalence’, from which it follows that the inertial mass is the same as the gravitational mass. A further principle used in the general theory is that the laws of mechanics are the same in inertial and non-inertial frames of reference.

Still, the equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using Riemannian space-time, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any force is represented by a straight line in Minkowski space-time. In Riemannian space-time, the motion is represented by a line that is no longer straight in the Euclidean sense but is the line giving the shortest distance. Such a line is called a geodesic. Thus, space-time is said to be curved. The extent of this curvature is given by the ‘metric tensor’ for space-time, the components of which are solutions of Einstein’s ‘field equations’. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.

The predictions of general relativity differ from those of Newton’s theory by only small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light or other electromagnetic radiation in the presence of large bodies, and the Einstein shift. Very close agreement between the predictions of general relativity and the accurately measured values has now been obtained. The ‘Einstein shift’, or ‘gravitational red-shift’, is a small red-shift in the lines of a stellar spectrum caused by the gravitational potential at the level in the star at which the radiation is emitted (for a bright line) or absorbed (for a dark line). In the simplest terms, the shift can be explained as follows: a quantum of energy hv has mass hv/c². On moving between two points with gravitational potential difference φ, the work done is φhv/c², so the change of frequency δv is φv/c².
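The fractional shift is therefore δv/v = φ/c². A small sketch (Python; the solar mass and radius are rounded values used only for illustration) estimates the gravitational red-shift of light leaving the sun’s surface:

    # δv/v = φ/c^2, with φ = G*M/R the potential difference, surface to infinity.
    G = 6.672_59e-11
    c = 299_792_458.0
    M_sun, R_sun = 1.99e30, 6.96e8   # kg and m (rounded values)
    phi = G * M_sun / R_sun
    print(phi / c**2)                # about 2e-6, i.e. two parts per million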

The assumptions from which Einstein’s special theory of relativity (1905) proceeds are (i) that inertial frameworks are equivalent for the description of all physical phenomena, and (ii) that the speed of light in empty space is constant for every observer, regardless of the motion of the observer or the light source. Although the second assumption may seem plausible in the light of the Michelson-Morley experiment of 1887, which failed to find any difference in the speed of light in the direction of the earth’s motion or when measured perpendicular to it, it seems likely that Einstein was not influenced by the experiment, and may not even have known the results. Because of the second postulate, no matter how fast she travels, an observer can never overtake a ray of light and see it as stationary beside her: however closely her speed approaches that of light, light still retreats from her at its classical speed. The consequences are that space, time and mass become relative to the observer. Measurements of quantities in an inertial system moving relative to one’s own reveal slow clocks, with the effect increasing as the relative speed of the systems approaches the speed of light. Events deemed simultaneous as measured within one such system will not be simultaneous as measured from the other; time and space thus lose their separate identity and become parts of a single space-time. The special theory also has the famous consequence (E = mc²) of the equivalence of energy and mass.

Einstein’s general theory of relativity (1916) treats non-inertial systems, i.e., those accelerating relative to each other. The leading idea is that the laws of motion in an accelerating frame are equivalent to those in a gravitational field. The theory treats gravity not as a Newtonian force acting in an unknown way across distance, but as a metrical property of a space-time continuum that is curved near matter. Gravity can be thought of as a field described by the metric tensor at every point. The first serious non-Euclidean geometry is usually attributed to the Russian mathematician N. I. Lobachevski, writing in the 1820s. Euclid’s fifth axiom, the axiom of parallels, states that through any point not falling on a straight line, exactly one straight line can be drawn that does not intersect the first. In Lobachevski’s geometry several such lines can exist. Later G. F. B. Riemann (1826-66) realized that the two-dimensional geometry that would be hit upon by persons confined to the surface of a sphere would be different from that of persons living on a plane: for example, π would be smaller, since the diameter of a circle, as drawn on a sphere, is large compared with the circumference. Generalizing, Riemann reached the idea of a geometry in which there are no straight lines that do not intersect a given straight line, just as on a sphere all great circles (the lines of shortest distance between two points) intersect.

The way then lay open to separating the question of the mathematical nature of a purely formal geometry from the question of its physical application. In 1854 Riemann showed that a space of any curvature could be described by a set of numbers known as its metric tensor. For example, ten numbers suffice to describe the metric at any point of a four-dimensional manifold. To apply a geometry means finding coordinative definitions correlating the notions of the geometry, notably those of a straight line and an equal distance, with physical phenomena such as the path of a light ray, or the size of a rod at different times and places. The status of these definitions has been controversial, with some, such as Poincaré, seeing them simply as conventions, and others seeing them as important empirical truths. With the general rise of holism in the philosophy of science the question of their status has abated a little, it being recognized simply that the coordination plays a fundamental role in physical science.

The classic analogy of curved space-time is a rock sitting on a bed. If a heavy object is rolled across the bed, it is deflected toward the rock not by a mysterious force, but by the deformation of the space, i.e., the depression of the sheet around the rock; it follows a so-called curvilinear trajectory. Interestingly, the general theory lends some credit to a version of the Newtonian absolute theory of space, in the sense that space itself is regarded as a thing with metrical properties of its own. The search for a unified field theory is the attempt to show that, just as gravity is explicable as a result of the nature of space-time, so are the other fundamental physical forces: the strong and weak nuclear forces and the electromagnetic force. The theory of relativity is the most radical challenge to the ‘common sense’ view of space and time as fundamentally distinct from each other, with time as an absolute linear flow in which events are fixed in objective relationships.

After adaptive changes in the brains and bodies of hominids made it possible for modern humans to construct a symbolic universe using complex language systems, something quite dramatic and wholly unprecedented occurred. We began to perceive the world through the lenses of symbolic categories, to construct similarities and differences in terms of categorical priorities, and to organize our lives according to themes and narratives. Living in this new symbolic universe, modern humans felt a growing compulsion to encode and recode experience, to translate everything into representation, and to seek out the deeper hidden and underlying logic that eliminates inconsistencies and ambiguities.

The mega-narrative or frame tale that served to legitimate and rationalize the categorical oppositions and terms of relation between the myriad constructs in the symbolic universe of modern humans was religion. The use of religious thought for these purposes is quite apparent in the artifacts found in the fossil remains of people living in France and Spain forty thousand years ago. These artifacts provide the first concrete evidence that a fully developed language system had given birth to an intricate and complex social order.

Both religious and scientific thought seek to frame or construct reality in terms of origins, primary oppositions, and underlying causes, and this partially explains why fundamental assumptions in the Western metaphysical tradition were eventually incorporated into a view of reality that would later be called scientific. The history of scientific thought reveals that the dialogue between assumptions about the character of spiritual reality in ordinary language and the character of physical reality in mathematical language was intimate and ongoing from the early Greek philosophers to the first scientific revolution in the seventeenth century. However, this dialogue did not conclude, as many have argued, with the emergence of positivism in the eighteenth and nineteenth centuries. It was perpetuated in a disguised form in the hidden ontology of classical epistemology, the central issue in the Bohr-Einstein debate.

The assumption that a one-to-one correspondence exists between every element of physical reality and physical theory may serve to bridge the gap between mind and world for those who use physical theories. Still, it also suggests that the Cartesian division is real and insurmountable in constructions of physical reality based on ordinary language. This explains in no small part why the radical separation between mind and world sanctioned by classical physics and formalized by Descartes (1596-1650) remains, as philosophical postmodernism attests, one of the most pervasive features of Western intellectual life.

Nietzsche, in subverting the epistemological authority of scientific knowledge, posited a division between mind and world much starker than that originally envisioned by Descartes. What is not widely known, however, is that Nietzsche and other seminal figures in the history of philosophical postmodernism were very much aware of an epistemological crisis in scientific thought that arose much earlier than that occasioned by wave-particle dualism in quantum physics. This crisis resulted from attempts during the last three decades of the nineteenth century to develop a logically self-consistent definition of number and arithmetic that would serve to reinforce the classical view of correspondence between mathematical theory and physical reality. As it turned out, these efforts resulted in paradoxes of recursion and self-reference that threatened to undermine both the efficacy of this correspondence and the privileged character of scientific knowledge.

Nietzsche appealed to this crisis to reinforce his assumption that, without ontology, all knowledge (including scientific knowledge) was grounded only in human consciousness. As the crisis continued, a philosopher trained in higher mathematics and physics, Edmund Husserl (1859-1938), attempted to preserve the classical view of correspondence between mathematical theory and physical reality by deriving the foundations of logic and number from consciousness in ways that would preserve self-consistency and rigour. This effort to ground mathematical physics in human consciousness, or in human subjective reality, was no trivial matter: it represents a direct link between these early challenges to the efficacy of classical epistemology and the tradition in philosophical thought that culminated in philosophical postmodernism.

Since Husserl’s epistemology, like that of Descartes and Nietzsche, was grounded in human subjectivity, a better understanding of his attempt to preserve the classical view of correspondence not only reveals more about the legacy of Cartesian dualism; it also suggests that the hidden and underlying ontology of classical epistemology was more responsible for the deep division and conflict between the two cultures of humanists-social scientists and scientists-engineers than was previously thought. The central question in this late-nineteenth-century debate over the status of the mathematical description of nature was the following: is the foundation of number and logic grounded in classical epistemology, or must we assume, without any ontology, that the rules of number and logic are grounded only in human consciousness? In order to frame this question in the proper context, we should first examine in more detail the intimate and ongoing dialogue between physics and metaphysics in Western thought.

The history of science reveals that scientific knowledge and method did not emerge fully formed from the minds of the ancient Greeks any more than language and culture emerged fully formed in the minds of Homo sapiens sapiens. Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigation. Nevertheless, it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.

The philosophical debate that led to conclusions useful to the architects of classical physics can be briefly summarized. Thales’ fellow Milesian Anaximander claimed that the first substance, although indeterminate, manifested itself in a conflict of oppositions between hot and cold, moist and dry. The idea of nature as a self-regulating balance of forces was subsequently elaborated upon by Heraclitus (d. after 480 BC), who asserted that the fundamental substance is strife between opposites, which is itself the unity of the whole. It is, said Heraclitus, the tension between opposites that keeps the whole from simply ‘passing away.’

Parmenides of Elea (b. c. 515 BC) argued in turn that the unifying substance is unique and static Being. This led to a conclusion about the relationship between ordinary language and external reality that was later incorporated into the view of the relationship between mathematical language and physical reality. Since thinking or naming involves the presence of something, said Parmenides, thought and language must be dependent upon the existence of objects outside the human intellect. Presuming a one-to-one correspondence between word and idea and actually existing things, Parmenides concluded that our ability to think or speak of a thing at various times implies that it exists at all times. So the indivisible One does not change, and all perceived change is an illusion.

These assumptions emerged in roughly the form in which they would be used by the creators of classical physics in the thought of the atomists, Leucippus (fl. c. 450-420 BC) and Democritus (c. 460-c. 370 BC). They reconciled the two dominant and seemingly antithetical concepts of the fundamental character of being, changing Becoming (Heraclitus) and unchanging Being (Parmenides), in a remarkably simple and direct way. Being, they said, is present in the invariable substance of the atoms that, through blending and separation, make up the things of the changing or becoming world.

The last remaining feature of what would become the paradigm for the first scientific revolution in the seventeenth century is attributed to Pythagoras (b. c. 570 BC). Like Parmenides, Pythagoras also held that the perceived world is illusory and that there is an exact correspondence between ideas and aspects of external reality. Pythagoras, however, had a different conception of the character of the idea that showed this correspondence. The truth about the fundamental character of the unified and unifying substance, which could be uncovered through reason and contemplation, is, he claimed, mathematical in form.

Pythagoras established and was the central figure in a school of philosophy, religion, and mathematics; he was apparently viewed by his followers as semi-divine. For his followers the regular solids (symmetrical three-dimensional forms in which all sides are the same regular polygons) and whole numbers became revered essences of sacred ideas. In contrast with ordinary language, the language of mathematics and geometric forms seemed closed, precise, and pure. Provided one understood the axioms and notations, the meaning conveyed was invariant from one mind to another. The Pythagoreans felt that this language empowered the mind to leap beyond the confusion of sense experience into the realm of immutable and eternal essences. This mystical insight made Pythagoras the figure from antiquity most revered by the creators of classical physics, and it continues to have great appeal for contemporary physicists as they struggle with the epistemological implications of the quantum mechanical description of nature.

Progress was nonetheless made in mathematics, and to a lesser extent in physics, between the time of classical Greek philosophy and the seventeenth century in Europe. In Baghdad, for example, from about A.D. 750 to A.D. 1000, substantial advances were made in medicine and chemistry, and the relics of Greek science were translated into Arabic, digested, and preserved. Eventually these relics reentered Europe via the Arabic kingdoms of Spain and Sicily, and the works of figures like Aristotle (384-322 BC) and Ptolemy (fl. A.D. 127-148) reached the budding universities of France, Italy, and England during the Middle Ages.

For much of this period the Church provided the institutions, like the teaching orders, needed for the rehabilitation of philosophy. Nonetheless, the social, political, and intellectual climate in Europe was not ripe for a revolution in scientific thought until the seventeenth century. Even as late as the nineteenth century, the work of the new class of intellectuals we call scientists was more avocation than vocation, and the word ‘scientist’ did not appear in English until around 1840.

Copernicus (1473-1543) would have been described by his contemporaries as an administrator, a diplomat, an avid student of economics and classical literature, and, most notably, a highly honoured and placed church dignitary. Although we have named a revolution after him, this devoutly conservative man did not set out to create one. The placement of the Sun at the centre of the universe, which seemed right and necessary to Copernicus, was not a result of making careful astronomical observations. In fact, he made very few observations while developing his theory, and then only to ascertain whether his prior conclusions seemed correct. The Copernican system was also not any more useful in making astronomical calculations than the accepted model and was, in some ways, much more difficult to implement. What, then, was his motivation for creating the model and his reasons for presuming that the model was correct?

Copernicus felt that the placement of the Sun at the centre of the universe made sense because he viewed the Sun as the symbol of the presence of a supremely intelligent and intelligible God in a man-centred world. He was apparently led to this conclusion in part because the Pythagoreans believed that fire exists at the centre of the cosmos, and Copernicus identified this fire with the fireball of the Sun. The only support that Copernicus could offer for the greater efficacy of his model was that it represented a simpler and more mathematically harmonious model of the sort that the Creator would obviously prefer. The language used by Copernicus in The Revolutions of the Heavenly Orbs illustrates the religious dimension of his scientific thought: ‘In the midst of all the sun reposes, unmoving. Who, indeed, in this most beautiful temple would place the light-giver in any other part than that from which it can illumine all other parts?’

The belief that the mind of God as Divine Architect permeates the workings of nature was central to the scientific thought of Johannes Kepler (1571-1630). For this reason, most modern physicists would probably feel some discomfort in reading Kepler’s original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle with an intensity that might offend those who practice science in the modern sense of that word. Physical laws, wrote Kepler, ‘lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His own image, in order that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God’s, at least insofar as we can understand something of it in this mortal life.’

Believing, like Newton after him, in the literal truth of the words of the Bible, Kepler concluded that the word of God is also transcribed in the immediacy of observable nature. Kepler’s discovery that the motions of the planets around the Sun were elliptical, as opposed to perfect circles, may have made the universe seem a less perfect creation of God in ordinary language. For Kepler, however, the new model placed the Sun, which he also viewed as the emblem of divine agency, more at the centre of a mathematically harmonious universe than the Copernican system allowed. Communing with the perfect mind of God requires, as Kepler put it, ‘knowledge of numbers and quantity.’

Since Galileo did not use, or even refer to, the planetary laws of Kepler when those laws would have made his defence of the heliocentric universe more credible, his attachment to the God-like circle was probably a more deeply rooted aesthetic and religious ideal. However, it was Galileo, even more than Newton, who was responsible for formulating the scientific idealism that quantum mechanics now forces us to abandon. In Dialogue Concerning the Two Chief World Systems, Galileo said the following about the followers of Pythagoras: ‘I know perfectly well that the Pythagoreans had the highest esteem for the science of number and that Plato himself admired the human intellect and believed that it participates in divinity solely because it is able to understand the nature of numbers. And I myself am inclined to make the same judgement.’

This article of faith, that mathematical and geometrical ideas mirror precisely the essences of physical reality, was the basis for the first scientific law of this new science, a constant describing the acceleration of bodies in free fall; yet the law could not be confirmed by experiment at the time. The experiments conducted by Galileo, in which balls of different sizes and weights were rolled simultaneously down an inclined plane, did not, as he frankly admitted, yield precise results. And since the vacuum pump had not yet been invented, there was simply no way that Galileo could subject his law to rigorous experimental proof in the seventeenth century. Galileo believed in the absolute validity of this law in the absence of experimental proof because he also believed that movement could be subjected absolutely to the law of number. What Galileo asserted, as the French historian of science Alexandre Koyré put it, was ‘that the real is in its essence geometrical and, consequently, subject to rigorous determination and measurement.’

The popular image of Isaac Newton (1642-1727) is that of a supremely rational and dispassionate empirical thinker. Newton, like Einstein, could concentrate unswervingly on complex theoretical problems until they yielded a solution. Yet what most consumed his restless intellect were not the laws of physics. Beyond believing, like Galileo, that the essences of physical reality could be read in the language of mathematics, Newton also believed, with perhaps even greater intensity than Kepler, in the literal truths of the Bible.

For Newton the mathematical language of physics and the language of biblical literature were equally valid sources of communion with the eternal; his writings on biblical subjects in the extant documents alone consist of more than a million words in his own hand, and some of his speculations seem quite bizarre by contemporary standards. The Earth, said Newton, will still be inhabited after the day of judgement, and heaven, or the New Jerusalem, must be large enough to accommodate both the quick and the dead. Newton then put his mathematical genius to work and determined the dimensions required to house this population; his precise estimate was ‘the cube root of 12,000 furlongs.’

The point is that during the first scientific revolution the marriage between mathematical idea and physical reality, or between mind and nature via mathematical theory, was viewed as a sacred union. In our more secular age, the correspondence takes on the appearance of an unexamined article of faith or, to borrow a phrase from William James (1842-1910), ‘an altar to an unknown god.’ Heinrich Hertz, the famous nineteenth-century German physicist, nicely described what it is about the practice of physics that tends to inculcate this belief: ‘One cannot escape the feeling that these mathematical formulae have an independent existence and an intelligence of their own, that they are wiser than we, wiser even than their discoverers, that we get more out of them than was originally put into them.’

While Hertz made this statement without having to contend with the implications of quantum mechanics, the feeling he described remains among the most enticing and exciting aspects of physics. That elegant mathematical formulae provide a framework for understanding the origins and transformations of a cosmos of enormous age and dimensions is a staggering discovery for budding physicists. Professors of physics do not, of course, tell their students that the study of physical laws is an act of communion with the perfect mind of God or that these laws have an independent existence outside the minds that discover them. The business of becoming a physicist typically begins, however, with the study of classical or Newtonian dynamics, and this training provides considerable covert reinforcement of the feeling that Hertz described.

Perhaps the best way to examine the legacy of the dialogue between science and religion in the debate over the implications of quantum non-locality is to examine the source of Einstein’s objections to quantum epistemology in more personal terms. Einstein apparently lost faith in the God portrayed in biblical literature in early adolescence. However, his ‘Autobiographical Notes’ suggest that aspects of that faith carried over into his understanding of the foundations of scientific knowledge: ‘Thus I came, despite the fact that I was the son of entirely irreligious (Jewish) parents, to a deep religiosity, which, however, found an abrupt end at the age of twelve. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic orgy of freethinking coupled with the impression that youth is intentionally being deceived by the state through lies; it was a crushing impression. Suspicion against every kind of authority grew out of this experience. It was clear to me that the religious paradise of youth, which was thus lost, was a first attempt to free myself from the chains of the “merely personal”. The mental grasp of this extra-personal world within the frame of the given possibilities swam as highest aim half consciously and half unconsciously before the mind’s eye.’

It was, suggested Einstein, belief in the word of God as it is revealed in biblical literature that allowed him to dwell in a ‘religious paradise of youth’ and to shield himself from the harsh realities of social and political life. In an effort to recover the inner sense of security that was lost after exposure to scientific knowledge, or to become free again of the ‘merely personal’, he committed himself to understanding the ‘extra-personal world within the frame of the given possibilities’, or, as seems obvious, to the study of physics. Although the existence of God as described in the Bible may have been in doubt, the qualities of mind that the architects of classical physics associated with this God were not. This is clear in Einstein’s comments on the uses of mathematics: ‘Nature is the realization of the simplest conceivable mathematical ideas, and one may be convinced that we can discover, by means of purely mathematical construction, those concepts and those lawful connections between them that furnish the key to the understanding of natural phenomena. Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. Nevertheless, the creative principle resides in mathematics. In a certain sense, therefore, it is true that pure thought can grasp reality, as the ancients dreamed.’

This article of faith, first articulated by Kepler, that ‘nature is the realization of the simplest conceivable mathematical ideas’, allowed Einstein to posit the first major law of modern physics much as it had allowed Galileo to posit the first major law of classical physics. During the period when the special and then the general theories of relativity had not yet been confirmed by experiment, and many established physicists viewed them as at best minor theories, Einstein remained entirely confident of their predictions. Ilse Rosenthal-Schneider, who visited Einstein shortly after Eddington’s eclipse expedition confirmed a prediction of the general theory (1919), described Einstein’s response to this news: ‘When I was giving expression to my joy that the results coincided with his calculations, he said quite unmoved, “But I knew the theory was correct”, and when I asked what if there had been no confirmation of his prediction, he countered: “Then I would have been sorry for the dear Lord; the theory is correct.”’

Einstein was not given to making sarcastic or sardonic comments, particularly on matters of religion. These unguarded responses testify to his profound conviction that the language of mathematics allows the human mind access to immaterial and immutable truths existing outside the mind that conceived them. Although Einstein’s belief was far more secular than Galileo’s, it retained the same essential ingredients.

This faith persisted throughout the twenty-three-year-long debate between Einstein and Bohr over the merits and limits of a physical theory. At the heart of this debate was the fundamental question, ‘What is the relationship between the mathematical forms in the human mind called physical theory and physical reality?’ Einstein did not believe in a God who spoke in tongues of flame from the mountaintop in ordinary language, and he could not sustain belief in the anthropomorphic God of the West. There is also no suggestion that he embraced ontological monism, or the conception of Being featured in Eastern religious systems, like Taoism, Hinduism, and Buddhism. The closest that Einstein apparently came to affirming the existence of the ‘extra-personal’ in the universe was a ‘cosmic religious feeling’, which he closely associated with the classical view of scientific epistemology.

The doctrine that Einstein fought to preserve seemed the natural inheritance of physics until the advent of quantum mechanics. Although the mind that constructs reality might be evolving fictions that are not necessarily true or necessary in social and political life, there was, Einstein felt, a way of knowing, purged of deceptions and lies. He was convinced that knowledge of physical reality in physical theory mirrors the preexistent and immutable realm of physical laws. As Einstein consistently made clear, this knowledge mitigates loneliness and inculcates a sense of order and reason in a cosmos that might otherwise appear bereft of meaning and purpose.

What most disturbed Einstein about quantum mechanics was the fact that this physical theory might not, in experiment or even in principle, mirror precisely the structure of physical reality. There is an inherent uncertainty in the measurement of any quantum mechanical process, and accepting quantum theory as complete would mean accepting that uncertainty as fundamental. Einstein feared that this would force us to recognize that the inherent uncertainty applies to all of physics, and, therefore, that the ontological bridge between mathematical theory and physical reality does not exist. This would mean, as Bohr was among the first to realize, that we must profoundly revise the epistemological foundations of modern science.

The world view of classical physics allowed the physicist to assume that communion with the essences of physical reality via mathematical laws and associated theories was possible, but it made no other provision for the knowing mind. In our new situation, the status of the knowing mind seems quite different. Modern physics has come to view the universe as an unbroken, undissectible, and undivided dynamic whole. ‘There can hardly be a sharper contrast,’ said Milič Čapek, ‘than that between the everlasting atoms of classical physics and the vanishing “particles” of modern physics.’ As Stapp put it: ‘Each atom turns out to be nothing but the potentialities in the behaviour pattern of others. What we find, therefore, are not elementary space-time realities, but rather a web of relationships in which no part can stand alone; every part derives its meaning and existence only from its place within the whole.’

The characteristics of particles and quanta are not isolatable, given particle-wave dualism and the incessant exchange of quanta within matter-energy fields. Matter cannot be dissected from the omnipresent sea of energy, nor can we in theory or in fact observe matter from the outside. As Heisenberg put it decades ago, the cosmos ‘is a complicated tissue of events, in which connections of different kinds alternate or overlap or combine and thereby determine the texture of the whole.’ This means that a purely reductionist approach to understanding physical reality, which was the goal of classical physics, is no longer appropriate.

While the formalism of quantum physics predicts that correlations between particles over space-like separated regions are possible, it can say nothing about what this strange new relationship between parts (quanta) and whole (cosmos) means outside that formalism. This does not, however, prevent us from considering the implications in philosophical terms. As the philosopher of science Errol Harris noted in thinking about the special character of wholeness in modern physics, a unity without internal content is a blank or empty set and is not recognizable as a whole. A collection of merely externally related parts does not constitute a whole in that the parts will not be ‘mutually adaptive and complementary to one another.’

Wholeness requires a complementary relationship between unity and differences and is governed by a principle of organization determining the interrelationship between parts. This organizing principle must be universal to a genuine whole and implicit in all parts that make up the whole, although the whole is exemplified only in its parts. This principle of order, Harris continued, ‘is nothing really by itself. It is the way parts are organized and not another constituent addition to those that form the totality.’

In a genuine whole, the relationship between the constituent parts must be ‘internal or immanent’ in the parts, as opposed to a mere spurious whole in which parts appear to disclose wholeness due to relationships that are external to the parts. The collection of parts that would allegedly make up the whole in classical physics is an example of a spurious whole. Parts constitute a genuine whole when the universal principle of order is inside the parts and thereby adjusts each to all so that they interlock and become mutually complementary. This not only describes the character of the whole revealed in both relativity theory and quantum mechanics; it is also consistent with the manner in which we have begun to understand the relation between parts and whole in modern biology.

Modern physics also reveals, claims Harris, a complementary relationship between the differences between parts that constitute content and the universal ordering principle that is immanent in each of the parts. While the whole cannot be finally revealed in the analysis of the parts, the study of the differences between parts provides insights into the dynamic structure of the whole present in each of the parts. The part can never, nonetheless, be finally isolated from the web of relationships that disclose the interconnections with the whole, and any attempt to do so results in ambiguity.

Much of the ambiguity in attempted explanations of the character of wholes in both physics and biology derives from the assumption that order exists between or outside parts. Yet the order in complementary relationships between difference and sameness in any physical event is never external to that event, and the assumption that it is external is subjective. From this perspective, the addition of non-locality to this picture of the dynamic whole is not surprising. The relationship between part, as quantum event apparent in observation or measurement, and the inseparable whole, revealed but not described by the instantaneous correlations between measurements in space-like separated regions, is another extension of the part-whole complementarity in modern physics.

If the universe is a seamlessly interactive system that evolves to higher levels of complexity, and if the lawful regularities of this universe are emergent properties of this system, we can assume that the cosmos as a whole displays the ‘progressive principle of order’ of complementary relations between its parts. Given that this whole exists in some sense within all parts (quanta), one can then argue that it operates in self-reflective fashion and is the ground for all emergent complexity. Since human consciousness shows self-reflective awareness in the human brain, and since this brain, like all physical phenomena, can be viewed as an emergent property of the whole, it is reasonable to conclude, in philosophical terms at least, that the universe is conscious.

However, since the actual character of this seamless whole cannot be represented or reduced to its parts, it lies, quite literally, beyond all human representations or descriptions. If one chooses to believe that the universe is a self-reflective and self-organizing whole, this lends no support whatever to conceptions of design, meaning, purpose, intent, or plan associated with any mytho-religious or cultural heritage. However, if one does not accept this view of the universe, there is nothing in the scientific description of nature that can be used to refute this position. On the other hand, it is no longer possible to argue that a profound sense of unity with the whole, which has long been understood as the foundation of religious experience, can be dismissed, undermined, or invalidated with appeals to scientific knowledge.

While we have consistently tried to distinguish between scientific knowledge and philosophical speculation based on this knowledge, there is no empirically valid causal linkage between the former and the latter, and those who wish to dismiss the speculative assumptions are obviously free to do so. There is, however, nothing in the scientific description of nature that sustains the belief in a radical Cartesian division between mind and world sanctioned by classical physics. It seems clear that this separation between mind and world was a macro-level illusion fostered by limited awareness of the actual character of physical reality and by mathematical idealizations extended beyond the realm of their applicability.

Thus, the grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strictly scientific terms. After all, the completeness of all previous physical theories was measured against this criterion with enormous success, and it was this success that gave physics its reputation for disclosing physical reality with magnificent exactitude; one might therefore hope that a more comprehensive quantum theory will emerge to meet these requirements.

All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or unavoidable reality requires a very different criterion for determining the completeness of a physical theory. The new measure of a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.

If a theory does so and continues to do so, which is clearly the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well, it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of electrons to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between ‘theory’ and ‘physical reality’.

In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions for each way and then taking the square of the amplitude of the sum. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron is going to end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, we would simply add the probabilities of the two alternative ways and let it go at that. The classical procedure does not work here, because we are not dealing with classical atoms. In quantum physics additional terms arise when the wave functions are added, and the probability is computed in accordance with what is known as the ‘superposition principle’.

The superposition principle can be illustrated with an analogy from simple mathematics: add two numbers and then take the square of their sum, as opposed to just adding the squares of the two numbers. Obviously, (2 + 3)² is not equal to 2² + 3²: the former is 25, and the latter is 13. In the language of quantum probability theory:


|ψ1 + ψ2|² ≠ |ψ1|² + |ψ2|²,

where ψ1 and ψ2 are the individual wave functions. On the left-hand side, the superposition principle results in extra terms that cannot be found on the right-hand side. The left-hand side of the above relation is the way a quantum physicist would compute probabilities, and the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in an instance like this. The extra superposition terms contained in the left-hand side of the above relation would not be there, and the peculiar wavelike interference pattern would disappear. The observed pattern on the final screen would, therefore, be what one would expect if electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. In other words, once we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
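To make the contrast concrete, the following is a minimal numerical sketch of the two recipes in Python; the complex amplitudes psi1 and psi2 are illustrative assumptions, not values taken from any experiment.

# Sketch: quantum vs. classical recipe for a two-path event.
import numpy as np

# Complex amplitudes for the electron reaching one point on the screen
# via slit 1 and via slit 2 (illustrative values only).
psi1 = 0.6 * np.exp(1j * 0.0)    # path through slit 1
psi2 = 0.6 * np.exp(1j * 2.1)    # path through slit 2, different phase

quantum = abs(psi1 + psi2) ** 2               # |psi1 + psi2|^2, with interference
classical = abs(psi1) ** 2 + abs(psi2) ** 2   # |psi1|^2 + |psi2|^2, no interference
extra = quantum - classical                   # the superposition (interference) terms

print(quantum, classical, extra)              # the two recipes disagree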

In order to give a full account of quantum recipes for computing probabilities, one has to examine what would happen in events that are compound. Compound events are ‘events that can be broken down into a series of steps, or events that consist of a number of things happening independently.’ The recipe here calls for multiplying the individual wave functions, and then following the usual quantum recipe of taking the square of the amplitude.

The quantum recipe is |ψ1 • ψ2|², and, in this case, it would be the same if we multiplied the individual probabilities, as one would in classical theory. Thus, the recipes for computing results in quantum theory and classical physics can be totally different. The quantum superposition effects are completely nonclassical, and there is no mathematical justification per se why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us in countless experiments to extend our ability to co-ordinate experience with nature.
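The compound-event recipe can be sketched the same way; as noted above, multiplying amplitudes and then squaring agrees with multiplying the individual probabilities (the amplitude values are again purely illustrative).

# Sketch: compound-event recipe, |psi1 * psi2|^2 versus product of probabilities.
import numpy as np

psi1 = 0.8 * np.exp(1j * 0.4)   # amplitude for step 1 (illustrative)
psi2 = 0.5 * np.exp(1j * 1.3)   # amplitude for an independent step 2 (illustrative)

p_quantum = abs(psi1 * psi2) ** 2                   # |psi1 * psi2|^2
p_classical = abs(psi1) ** 2 * abs(psi2) ** 2       # product of probabilities

print(np.isclose(p_quantum, p_classical))           # True: the recipes agree here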

Quantum theory, introduced by Planck (1900), marked a departure from the classical mechanics of Newton, involving the principle that certain physical quantities can only assume discrete values. Certain conditions are imposed on these quantities to restrict their values, and the quantities are then said to be ‘quantized’.

Up to 1900, physics was based on Newtonian mechanics, which usually describes large-scale systems adequately. Several problems could not be solved, however, in particular the explanation of the curves of energy against wavelength for ‘black-body radiation’, with their characteristic maximum. Attempted explanations rested on the ideas that the enclosure producing the radiation contained a number of ‘standing waves’ and that the energy of an oscillator is kT, where ‘k’ is the Boltzmann constant and ‘T’ the thermodynamic temperature. It is a consequence of classical theory that the energy does not depend on the frequency of the oscillator. This inability to explain the phenomenon has been called the ‘ultraviolet catastrophe’.

Planck tackled the problem by discarding the idea that an oscillator can gain or lose energy continuously, suggesting that it could only change by some discrete amount, which he called a ‘quantum’. This unit of energy is given by hv, where ‘v’ is the frequency and ‘h’ is the Planck constant; ‘h’ has the dimensions of energy × time, and was called the ‘quantum of action’. According to Planck an oscillator could only change its energy by an integral number of quanta, i.e., by hv, 2hv, 3hv, etc. This meant that the radiation in an enclosure has certain discrete energies, and by considering the statistical distribution of oscillators with respect to their energies, he was able to derive the Planck radiation formula, which expresses the distribution of energy in the normal spectrum of ‘black-body’ radiation. Its usual form is:

8πch dλ / λ⁵( exp[ch/kλT] ‒ 1 ),

which represents the amount of energy per unit volume in the range of wavelengths between λ and λ + dλ; ‘c’ is the speed of light, ‘h’ the Planck constant, ‘k’ the Boltzmann constant, and ‘T’ the thermodynamic temperature.
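A minimal sketch of this formula follows, evaluating the energy density at an assumed temperature of 5000 K; the peak it finds can be checked against Wien's displacement law.

# Sketch: Planck energy density per unit wavelength, at T = 5000 K (illustrative).
import numpy as np

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck_energy_density(lam, T):
    # 8*pi*h*c / lam^5 / (exp(hc/(lam k T)) - 1), in J per m^3 per m
    return 8 * np.pi * h * c / lam**5 / np.expm1(h * c / (lam * k * T))

lam = np.linspace(100e-9, 3000e-9, 2000)    # wavelengths from 100 nm to 3000 nm
u = planck_energy_density(lam, 5000.0)
print(lam[np.argmax(u)])   # close to Wien's value 2.898e-3 / 5000 ≈ 5.8e-7 m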

The idea of quanta of energy was applied to other problems in physics. In 1905 Einstein explained features of the ‘photoelectric effect’ by assuming that light was absorbed in quanta (photons). A further advance was made by Bohr (1913) in his theory of atomic spectra, in which he assumed that the atom can only exist in certain energy states and that light is emitted or absorbed as a result of a change from one state to another. He used the idea that the angular momentum of an orbiting electron could only assume discrete values, i.e., was quantized. A refinement of Bohr’s theory was introduced by Sommerfeld in an attempt to account for fine structure in spectra. Other successes of quantum theory were its explanations of the ‘Compton effect’ and ‘Stark effect’. Later developments involved the formulation of a new system of mechanics known as ‘quantum mechanics’.

Compton scattering is an interaction between a photon of electromagnetic radiation and a free electron, or other charged particle, in which some of the energy of the photon is transferred to the particle. As a result, the wavelength of the photon is increased by an amount Δλ, where:

Δλ = ( 2h / m0c ) sin² ½φ.

This is the Compton equation; ‘h’ is the Planck constant, m0 the rest mass of the particle, ‘c’ the speed of light, and φ the angle between the directions of the incident and scattered photons. The quantity h/m0c is known as the ‘Compton wavelength’, symbol λC, which for an electron is equal to 0.002 43 nm.
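A short sketch of the equation for an electron follows; the 90° scattering angle is an arbitrary illustrative choice.

# Sketch: Compton wavelength and wavelength shift for an electron.
import math

h = 6.626e-34     # Planck constant, J s
m0 = 9.109e-31    # electron rest mass, kg
c = 2.998e8       # speed of light, m/s

lambda_c = h / (m0 * c)                          # Compton wavelength
phi = math.radians(90.0)                         # scattering angle (example)
delta_lambda = (2 * h / (m0 * c)) * math.sin(phi / 2) ** 2

print(lambda_c * 1e9)      # ≈ 0.00243 nm, as quoted in the text
print(delta_lambda * 1e9)  # at 90° the shift equals one Compton wavelength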

The outer electrons in all elements, and the inner ones in those of low atomic number, have ‘binding energies’ negligible compared with the quantum energies of all except very soft X- and gamma rays. Thus most electrons in matter are effectively free and at rest, and so cause Compton scattering. In the range of quantum energies 10⁵ to 10⁷ electronvolts, this effect is commonly the most important process of attenuation of radiation. The scattered electron is ejected from the atom with large kinetic energy, and the ionization that it causes plays an important part in the operation of detectors of radiation.

In the ‘inverse Compton effect’ there is a gain in energy by low-energy photons as a result of being scattered by free electrons of much higher energy; as a consequence, the electrons lose energy. In the ‘Stark effect’, by contrast, the wavelength of light emitted by atoms is altered by the application of a strong transverse electric field to the source, the spectral lines being split up into a number of sharply defined components. The displacements are symmetrical about the position of the undisplaced line, and are proportional to the field strength up to about 100,000 volts per cm.

Quantum mechanics is the mathematical physical theory that grew from Planck’s quantum theory and deals with the mechanics of atomic and related systems in terms of quantities that can be measured. The subject developed in several mathematical forms, including ‘wave mechanics’ (Schrödinger) and ‘matrix mechanics’ (Born and Heisenberg), all of which are equivalent.

In quantum mechanics, it is often found that the properties of a physical system, such as its angular momentum and energy, can only take discrete values. Where this occurs the property is said to be ‘quantized’ and its various possible values are labelled by a set of numbers called quantum numbers. For example, according to Bohr’s theory of the atom, an electron moving in a circular orbit could occupy not any orbit at any distance from the nucleus but only an orbit for which its angular momentum (mvr) was equal to nh/2π, where ‘n’ is an integer (1, 2, 3, etc.) and ‘h’ is the Planck constant. Thus the property of angular momentum is quantized, and ‘n’ is a quantum number that gives its possible values. The Bohr theory has now been superseded by a more sophisticated theory in which the idea of orbits is replaced by regions in which the electron may move, characterized by quantum numbers ‘n’, ‘l’, and ‘m’.

Bohr’s theory (1913) was the first significant application of the quantum theory to atomic structure, although it has since been replaced by the quantum mechanics described above.

A mathematical formulation of the special theory of relativity was given by Minkowski. It is based on the idea that an event is specified by four coordinates: three space coordinates and one time coordinate. These coordinates define a four-dimensional space, called Minkowski space-time, and the motion of a particle can be described by a curve in this space.
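The defining property of Minkowski space-time is that the interval s² = (ct)² ‒ x² ‒ y² ‒ z² between events is the same in every inertial frame. A minimal sketch, with arbitrary illustrative coordinates and boost velocity:

# Sketch: invariance of the Minkowski interval under a Lorentz boost along x.
import math

t, x = 3.0, 2.0               # an event's coordinates, in units where c = 1 (example)

v = 0.6                       # boost velocity, as a fraction of c (example)
gamma = 1.0 / math.sqrt(1.0 - v * v)
t2 = gamma * (t - v * x)      # time coordinate in the boosted frame
x2 = gamma * (x - v * t)      # space coordinate in the boosted frame

print(t * t - x * x)          # interval in the original frame
print(t2 * t2 - x2 * x2)      # identical in the boosted frame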

The equivalence between a gravitational field and the fictitious forces in non-inertial systems can be expressed by using Riemannian space-time, which differs from the Minkowski space-time of the special theory. In special relativity the motion of a particle that is not acted on by any forces is represented by a straight line in Minkowski space-time. In general relativity, using Riemannian space-time, the motion is represented by a line that is no longer straight (in the Euclidean sense) but is the line giving the shortest distance. Such a line is called a ‘geodesic’, and space-time is said to be curved. The extent of this curvature is given by the metric tensor for space-time, the components of which are solutions to Einstein’s field equations. The fact that gravitational effects occur near masses is introduced by the postulate that the presence of matter produces this curvature of space-time. This curvature of space-time controls the natural motions of bodies.

The predictions of general relativity differ from Newton’s theory only by small amounts, and most tests of the theory have been carried out through observations in astronomy. For example, it explains the shift in the perihelion of Mercury, the bending of light or other electromagnetic radiation in the presence of large bodies, and the Einstein shift: a small redshift in the lines of a stellar spectrum caused by the gravitational potential at the level in the star at which the radiation is emitted (for a bright line) or absorbed (for a dark line). This shift can be explained in terms of either the special or the general theory of relativity. In the simplest terms, a quantum of energy hv has mass hv/c². On moving between two points with gravitational potential difference Φ, the work done is Φhv/c², so the change of frequency δv is Φv/c².
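As a sketch of the last formula, the fractional shift δv/v = Φ/c² can be evaluated for light climbing a tower in the Earth's field (Φ = gH); the 22.5 m height is chosen to echo, roughly, the Pound-Rebka experiment.

# Sketch: gravitational frequency shift over a height H in the Earth's field.
g = 9.81      # gravitational acceleration, m/s^2
H = 22.5      # height climbed by the light, m (illustrative)
c = 2.998e8   # speed of light, m/s

phi = g * H               # gravitational potential difference
print(phi / c**2)         # δv/v ≈ 2.5e-15, minute but measurable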

Bohr’s theory treated in particular the simplest atom, that of hydrogen, consisting of a nucleus and one electron. It was assumed that there is a ground state in which an isolated atom would remain permanently, and short-lived states of higher energy to which the atom could be excited by collisions or absorption of radiation. It was supposed that radiation was emitted or absorbed in quanta of energy equal to integral multiples of hv, where ‘h’ is the Planck constant and ‘v’ is the frequency of the electromagnetic waves. (Later it was realized that a single quantum has the unique value hv.) The frequency of radiation emitted on capturing a free electron into the nth state (where n = 1 for the ground state) was supposed to be n/2 times the rotational frequency of the electron in a circular orbit. This idea led to, and was replaced by, the concept that the angular momentum of an orbit is quantized in terms of h/2π. The energy of the nth state was found to be given by:

En = ‒me⁴ / 8h²ε0²n²,

where ‘m’ is the reduced mass of the electron. This formula gave excellent agreement with the then known series of lines in the visible and infrared regions of the spectrum of atomic hydrogen, and predicted a series in the ultraviolet that was soon to be found by Lyman.
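The formula is easily checked numerically; the sketch below uses the electron mass in place of the reduced mass (an approximation good to about one part in two thousand) and recovers the first Lyman line.

# Sketch: Bohr energy levels of hydrogen and the 2 -> 1 (Lyman alpha) line.
m = 9.109e-31      # electron mass, kg (standing in for the reduced mass)
e = 1.602e-19      # electronic charge, C
h = 6.626e-34      # Planck constant, J s
eps0 = 8.854e-12   # permittivity of free space, F/m
c = 2.998e8        # speed of light, m/s

def E(n):
    return -m * e**4 / (8 * h**2 * eps0**2 * n**2)

print(E(1) / e)                  # ground state ≈ -13.6 eV
lam = h * c / (E(2) - E(1))      # photon wavelength for the 2 -> 1 transition
print(lam * 1e9)                 # ≈ 121.5 nm, in Lyman's ultraviolet series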

The extension of the theory to more complicated atoms had success but raised innumerable difficulties, which were only resolved by the development of wave mechanics.

An allowed wave function of an electron in an atom is obtained by solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is ‒e²/4πε0r, where ‘e’ is the electronic charge and ‘r’ its distance from the nucleus. A precise orbit cannot be considered, as in Bohr’s theory of the atom; instead the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|²dτ is the probability of locating the electron in the element of volume dτ.

Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position. They also have an associated value of the energy ‘E’. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those characterizing the allowed orbits in the earlier quantum theory of the atom.

‘n’, the principal quantum number, can have values 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called shells and designated the K, L, M shells, etc. ‘l’, the azimuthal quantum number, can for a given value of ‘n’ have values 0, 1, 2, . . . (n ‒ 1). Thus when n = 1, l can only have the value 0. An electron in the L shell of an atom, with n = 2, can occupy two subshells of different energy corresponding to l = 0 and l = 1. Similarly the M shell (n = 3) has three subshells with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2, and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:

√[l(l + 1)] (h/2π).
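A short sketch of this formula for the s, p, d, and f orbitals:

# Sketch: orbital angular momentum sqrt(l(l+1)) * h/(2*pi) for l = 0..3.
import math

h = 6.626e-34   # Planck constant, J s

for l, name in [(0, 's'), (1, 'p'), (2, 'd'), (3, 'f')]:
    L = math.sqrt(l * (l + 1)) * h / (2 * math.pi)
    print(name, L)   # zero for s orbitals, increasing with l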


An exact calculation of the energies and other properties of the quantum states is only possible for the simplest atoms, but there are various approximate methods that give useful results. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves. Other information may be obtained from magnetism and chemical properties.

Properties of elementary particles are also described by quantum numbers. For example, an electron has the property known as ‘spin’, and can exist in two possible energy states depending on whether this spin is set parallel or antiparallel to a certain direction. The two states are conveniently characterized by quantum numbers +½ and ‒½. Similarly, properties such as charge, isospin, strangeness, parity, and hypercharge are characterized by quantum numbers. In interactions between particles, a particular quantum number may be conserved, i.e., the sum of the quantum numbers of the particles before and after the interaction remains the same. It is the type of interaction (strong, electromagnetic, weak) that determines whether the quantum number is conserved.

Bohr discovered that if one uses Planck’s constant in combination with the known mass and charge of the electron, the approximate size of the hydrogen atom can be derived. Assuming that a jumping electron absorbs or emits energy in units of Planck’s constant, in accordance with the formula Einstein used to explain the photoelectric effect, Bohr was able to find correlations with the known spectral lines for hydrogen. More important, the model also served to explain why the electron does not, as electromagnetic theory says it should, radiate its energy quickly away and collapse into the nucleus.

Bohr reasoned that this does not occur because the orbits are quantized: electrons absorb and emit energy corresponding to the specific orbits. Their lowest energy state, or lowest orbit, is the ground state. What is notable, however, is that Bohr, although obliged to use macro-level analogies and classical theory, quickly and easily posits a view of the functional dynamics of the energy shells of the electron that has no macro-level analogy and is inexplicable within the framework of classical theory.

The central problem with Bohr’s model from the perspective of classical theory was pointed out by Rutherford shortly before the first of the papers describing the model was published. ‘There appears to me,’ Rutherford wrote in a letter to Bohr, ‘one grave problem in your hypothesis that I have no doubt you fully realize, namely, how does an electron decide what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.’ Viewing the electron as atomic in the Greek sense, or as a point-like object that moves, there is cause to wonder, in the absence of a mechanistic explanation, how this object instantaneously ‘jumps’ from one shell or orbit to another. It was essentially efforts to answer this question that led to the development of quantum theory.

The effect of Bohr’s model was to raise more questions than it answered. Although the model suggested that we can explain the periodic table of the elements by assuming that a maximum number of electrons are found in each shell, Bohr was not able to provide any mathematically acceptable explanation for the hypothesis. That explanation was provided in 1925 by Wolfgang Pauli, known throughout his career for his extraordinary talents as a mathematician.

Bohr had used three quantum numbers in his model. Pauli added a fourth, described as spin, which was initially represented with the macro-level analogy of a spinning ball on a pool table. Rather predictably, the analogy does not work. Whereas a classical spin can point in any direction, a quantum mechanical spin points either up or down along the axis of measurement. In total contrast to the classical notion of a spinning ball, we cannot even speak of the spin of the particle if no axis is measured.

When Pauli added this fourth quantum number, he found a correspondence between the number of electrons in each full shell of atoms and the new set of quantum numbers describing the shell. This became the basis for what we now call the ‘Pauli exclusion principle’. The principle is simple and yet quite startling: two electrons cannot have all their quantum numbers the same, and no two actual electrons are identical in the sense of having the same quantum numbers. The exclusion principle explains mathematically why there is a maximum number of electrons in the shell of any given atom. If the shell is full, adding another electron would be impossible because this would result in two electrons in the shell having the same quantum numbers.
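The correspondence can be exhibited by simply counting distinct quantum-number sets; the sketch below enumerates (n, l, m, s) and, with one electron allowed per set, recovers the familiar shell capacities 2n².

# Sketch: shell capacities implied by the exclusion principle.
def shell_capacity(n):
    count = 0
    for l in range(n):                 # l = 0 .. n-1
        for m in range(-l, l + 1):     # m = -l .. +l
            count += 2                 # s = +1/2 or -1/2
    return count

for n in (1, 2, 3, 4):
    print(n, shell_capacity(n))        # 2, 8, 18, 32 = 2n^2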

This may sound a bit esoteric, but the fact that nature obeys the exclusion principle is quite fortunate from our point of view. If electrons did not obey the principle, all elements would exist at the ground state and there would be no chemical affinity between them. Structures like crystals and DNA would not exist, for it is the exclusion principle that allows for chemical bonds, which, in turn, result in the hierarchy of structures from atoms and molecules to cells, plants, and animals.

The energy associated with a quantum state of an atom or other system is fixed, or determined, by a given set of quantum numbers. It is one of the various quantum states that can be assumed by an atom under defined conditions. The term is often used to mean the state itself, which is incorrect because: (i) the energy of a given state may be changed by externally applied fields; (ii) there may be a number of states of equal energy in the system.

The electrons in an atom can occupy any of an infinite number of bound states with discrete energies. For an isolated atom the energy for a given state is exactly determinate except for the effects of the ‘uncertainty principle’. The ground state with lowest energy has an infinite lifetime; hence its energy is, in principle, exactly determinate. The energies of these states are most accurately measured by finding the wavelength of the radiation emitted or absorbed in transitions between them, i.e., from their line spectra. Theories of the atom have been developed to predict these energies by calculation.

Wave mechanics, due to de Broglie and extended by Schrödinger, Dirac, and many others, originated in the suggestion that light consists of corpuscles as well as of waves, and the consequent suggestion that all elementary particles are associated with waves. Wave mechanics is based on the Schrödinger wave equation describing the wave properties of matter. It relates the energy of a system to a wave function; usually it is found that a system, such as an atom or molecule, can only have certain allowed wave functions (eigenfunctions) and certain allowed energies (eigenvalues). In wave mechanics the quantum conditions arise in a natural way from the basic postulates, as solutions of the wave equation. The energies of unbound states of positive energy form a continuum; this gives rise to the continuum background to an atomic spectrum, as electrons are captured from unbound states. The energy of an atomic state can be changed by the ‘Stark effect’ or the ‘Zeeman effect’.

The vibrational energies of molecules also have discrete values. For example, in a diatomic molecule the atoms oscillate along the line joining them. There is an equilibrium distance at which the force is zero; the atoms repel when closer and attract when further apart. The restoring force is nearly proportional to the displacement, hence the oscillations are simple harmonic. Solution of the Schrödinger wave equation gives the energies of a harmonic oscillator as:

En = ( n + ½ )hƒ,

where ‘h’ is the Planck constant, ƒ is the frequency, and ‘n’ is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hƒ. This is the cause of zero-point energy. The potential energy of interaction of atoms is described more exactly by the ‘Morse equation’, which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra’.

The rotational energy of a molecule is quantized also. According to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:

EJ = h²J( J + 1 ) / 8π²I,

where ‘J’ is the rotational quantum number, which can be zero or a positive integer. Rotational energies are found from ‘band spectra’.
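The two ladders of levels can be sketched together; the vibration frequency and moment of inertia below are assumed values, of the order of those for carbon monoxide, chosen only to show the very different energy scales.

# Sketch: vibrational and rotational levels of a diatomic molecule.
import math

h = 6.626e-34   # Planck constant, J s
f = 6.4e13      # vibration frequency, Hz (assumed, roughly CO)
I = 1.45e-46    # moment of inertia, kg m^2 (assumed, roughly CO)

for n in range(3):                    # vibrational levels: evenly spaced
    print('E_vib', n, (n + 0.5) * h * f)

for J in range(3):                    # rotational levels: spacing grows with J
    print('E_rot', J, h**2 * J * (J + 1) / (8 * math.pi**2 * I))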

The energies of the states of the nucleus are determined from the gamma-ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer effect’ has permitted the observation of some minute changes.

Quantum theory, introduced by Max Planck (1858-1947) in 1900, was the first serious scientific departure from Newtonian mechanics. It involved supposing that certain physical quantities can only assume discrete values. In the following two decades it was applied successfully by Einstein and the Danish physicist Niels Bohr (1885-1962). It was superseded by quantum mechanics in the years following 1924, when the French physicist Louis de Broglie (1892-1987) introduced the idea that a particle may also be regarded as a wave: a set of waves that represent the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice). The wavelength is given by the de Broglie equation. These waves are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. They were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. The Schrödinger wave equation relates the energy of a system to a wave function; the square of the amplitude of the wave is proportional to the probability of a particle being found in a specific position. The wave function expresses the impossibility of defining both the position and momentum of a particle exactly (the ‘uncertainty principle’), and the allowed wave functions describe stationary states of a system.
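The de Broglie equation, λ = h/p, is easily sketched for an electron accelerated through 100 volts (an illustrative energy); the resulting wavelength is comparable to crystal spacings, which is why the Davisson-Germer experiment could observe diffraction.

# Sketch: de Broglie wavelength of an electron accelerated through 100 V.
import math

h = 6.626e-34   # Planck constant, J s
m = 9.109e-31   # electron mass, kg
e = 1.602e-19   # electronic charge, C

E = 100.0 * e                 # kinetic energy, J
p = math.sqrt(2 * m * E)      # non-relativistic momentum
print(h / p * 1e9)            # ≈ 0.12 nm, of the order of interatomic spacings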

Part of the difficulty with the notions involved is that a system may be in an indeterminate state at a time, characterized only by the probability of some result for an observation, but then ‘become’ determinate (the collapse of the wave packet) when an observation is made, if that is taken to apply to reality itself rather than to mere indeterminacies of measurement. It is as if there is nothing but a potential for observation or a probability wave before an observation is made, but when an observation is made the wave becomes a particle. The wave-particle duality seems to block any way of conceiving of physical reality in quantum terms. In the famous two-slit experiment, an electron is fired at a screen with two slits, like a tennis ball thrown at a wall with two doors in it. If one puts detectors at each slit, every electron passing the screen is observed to go through exactly one slit. When the detectors are taken away, the electron acts like a wave process going through both slits and interfering with itself. A particle such as an electron is usually thought of as always having an exact position, but its wave is not absolutely zero anywhere; there is therefore a finite probability of it ‘tunnelling through’ from one position to emerge at another.

The unquestionable success of quantum mechanics has generated a large philosophical debate about its ultimate intelligibility and its metaphysical implications. The wave-particle duality is already a departure from ordinary ways of conceiving of things in space, and its difficulty is compounded by the probabilistic nature of the fundamental states of a system as they are conceived in quantum mechanics. Philosophical options for interpreting quantum mechanics have included variations of the belief that it is at best an incomplete description of a better-behaved classical underlying reality (Einstein), the Copenhagen interpretation according to which there are no objective unobserved events in the micro-world (Bohr and W. K. Heisenberg, 1901-76), an ‘acausal’ view of the collapse of the wave packet (J. von Neumann, 1903-57), and a ‘many worlds’ interpretation in which time forks perpetually toward innumerable futures, so that different states of the same system exist in different parallel universes (H. Everett).

In recent years the proliferation of subatomic particles (there are 36 kinds of quarks alone, in six flavours) has led physicists to look in various directions for unification. One avenue of approach is superstring theory, in which the four-dimensional world is thought of as the upshot of the collapse of a ten-dimensional world, with the four primary physical forces (gravity, electromagnetism, and the strong and weak nuclear forces) seen as the result of the fracture of one primary force. While the scientific acceptability of such theories is a matter for physics, their ultimate intelligibility plainly requires some philosophical reflection.

Quantum gravity, a theory of gravitation consistent with quantum mechanics, is a subject still in its infancy with no completely satisfactory theory. In conventional quantum gravity, the gravitational force is mediated by a massless spin-2 particle, called the ‘graviton’. The internal degrees of freedom of the graviton require hij(χ) to represent the deviations from the metric tensor for a flat space. This formulation of general relativity reduces it to a quantum field theory, which has a regrettable tendency to produce infinities for measurable quantities. However, unlike other quantum field theories, quantum gravity cannot appeal to renormalization procedures to make sense of these infinities. It has been shown that renormalization procedures fail for theories, such as quantum gravity, in which the coupling constants have the dimensions of a positive power of length. The coupling constant for general relativity is the Planck length,

Lp = ( Gh / c³ )½ ≈ 10⁻³⁵ m.

Supersymmetry has been suggested as a structure that could be free from these pathological infinities. Many theorists believe that an effective superstring field theory may emerge, in which the Einstein field equations are no longer valid and general relativity is required to appear only as a low-energy limit. The resulting theory may be structurally different from anything that has been considered so far. Supersymmetric string theory (or superstring theory) is an extension of the ideas of supersymmetry to one-dimensional string-like entities that can interact with each other and scatter according to a precise set of laws. The normal modes of superstrings represent an infinite set of ‘normal’ elementary particles whose masses and spins are related in a special way. Thus, the graviton is only one of the string modes: when the string-scattering processes are analysed in terms of their particle content, the low-energy graviton scattering is found to be the same as that computed from supersymmetric gravity. The graviton mode may still be related to the geometry of the space-time in which the string vibrates, but it remains to be seen whether the other, massive, members of the set of ‘normal’ particles also have a geometrical interpretation. The intricacy of this theory stems from the requirement of a space-time of at least ten dimensions to ensure internal consistency. It has been suggested that there are the normal four dimensions, with the extra dimensions being tightly ‘curled up’, presumably in a circle of Planck-length size.

The concept of the atom was first introduced by the ancient Greeks, as a tiny indivisible component of matter; it was developed by Dalton, as the smallest part of an element that can take part in a chemical reaction, and was made very much more precise by theory and experiment in the late 19th and early 20th centuries.

Following the discovery of the electron (1897), it was recognized that atoms had structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of an atom is concentrated at its centre in a region of positive charge, the nucleus, with a radius of the order of 10⁻¹⁵ metre. The electrons occupy the surrounding space to a radius of 10⁻¹¹ to 10⁻¹⁰ m. Rutherford also proposed that the nucleus has a charge of Ze and is surrounded by Z electrons (Z is the atomic number). According to classical physics such a system must emit electromagnetic radiation continuously, and consequently no permanent atom would be possible. This problem was solved by the development of the quantum theory.

The ‘Bohr theory of the atom’ (1913) introduced the concept that an electron in an atom is normally in a state of lowest energy, or ground state, in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited: an electron is moved into a state of higher energy. Such excited states usually have short lifetimes, typically nanoseconds, and the electron returns to the ground state, commonly by emitting one or more quanta of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld 1915) and electron spin (Pauli 1925), but a satisfactory theory only became possible upon the development of ‘wave mechanics’ after 1925.

According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of a wave equation. This determines the probability that the electron may be located in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the Pauli exclusion principle, not more than one electron can be in a given state.

The Pauli exclusion principle states that no two identical ‘fermions’ in any system can be in the same quantum state, that is, have the same set of quantum numbers. The principle was first proposed (1925) in the form that not more than two electrons in an atom could have the same set of quantum numbers. This hypothesis accounted for the main features of the structure of the atom and for the periodic table. An electron in an atom is characterized by four quantum numbers: n, l, m, and s. A particular atomic orbital, which has fixed values of n, l, and m, can thus contain a maximum of two electrons, since the spin quantum number ‘s’ can only be +½ or ‒½. In 1928 Sommerfeld applied the principle to the free electrons in solids, and his theory has been greatly developed by later associates.

The Zeeman effect occurs when atoms emit or absorb radiation in the presence of a moderately strong magnetic field. Each spectral line is split into closely spaced polarized components: when the source is viewed at right angles to the field there are three components, the middle one having the same frequency as the unmodified line, and when the source is viewed parallel to the field there are two components, the undisplaced line being absent. This is the ‘normal’ Zeeman effect. With most spectral lines, however, the anomalous Zeeman effect occurs, in which there is a greater number of symmetrically arranged polarized components. In both effects the displacement of the components is a measure of the magnetic field strength. In some cases the components cannot be resolved and the spectral line appears broadened.

The Zeeman effect occurs because the energies of individual electron states depend on their inclination to the direction of the magnetic field, and because quantum energy requirements impose conditions such that the plane of an electron orbit can only set itself at certain definite angles to the applied field. These angles are such that the projection of the total angular momentum on the field direction is an integral multiple of h/2π (h is the Planck constant). The Zeeman effect is observed with moderately strong fields, where the precession of the orbital angular momentum and the spin angular momentum of the electrons about each other is much faster than the total precession around the field direction. The normal Zeeman effect is observed when the conditions are such that the Landé factor is unity; otherwise the anomalous effect is found. This anomaly was one of the factors contributing to the discovery of electron spin.
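For the normal effect (Landé factor unity) the displacement is easily sketched: a component shifts in energy by Δm·μB·B, where μB is the standard Bohr magneton (a constant not defined in the text above) and the 1-tesla field is an illustrative choice.

# Sketch: the three components of the normal Zeeman effect in a 1 T field.
muB = 9.274e-24   # Bohr magneton, J/T
h = 6.626e-34     # Planck constant, J s
B = 1.0           # magnetic flux density, T (illustrative)

for dm in (-1, 0, 1):
    dE = dm * muB * B          # energy shift of the component
    print(dm, dE / h)          # frequency shifts of about -1.4e10, 0, +1.4e10 Hz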

Quantum statistics are concerned with the equilibrium distribution of elementary particles of a particular type among the various quantized energy states, on the assumption that the particles are indistinguishable. For fermions the ‘Pauli exclusion principle’ is obeyed, so that no two identical fermions can be in the same quantum mechanical state. The exchange of two identical fermions, e.g., two electrons, does not affect the probability distribution, but it does involve a change in the sign of the wave function. The ‘Fermi-Dirac distribution law’ gives the average number n̄(E) of identical fermions in a state of energy E:



n̄(E) = 1/[e^( α + E/kT ) + 1],

where ‘k’ is the Boltzmann constant, ‘T’ is the thermodynamic temperature, and α is a quantity depending on temperature and the concentration of particles. For the valence electrons in a solid, α takes the form ‒EF/kT, where EF is the Fermi level (or Fermi energy). At the Fermi level the value of n̄(E) is exactly one half; thus, for a system in equilibrium, one half of the states with energy very nearly equal to EF (if any) will be occupied. The value of EF varies very slowly with temperature, tending to E0 as ‘T’ tends to absolute zero.

In Bose-Einstein statistics, the Pauli exclusion principle is not obeyed, so that any number of identical ‘bosons’ can be in the same state. The exchange of two bosons of the same type affects neither the probability of distribution nor the sign of the wave function. The ‘Bose-Einstein distribution law’ gives the average number n̄(E) of identical bosons in a state of energy E:



n̄(E) = 1/[e^( α + E/kT ) ‒ 1].

The formula can be applied to photons, considered as quasi-particles, provided that the quantity α, which conserves the number of particles, is zero. Planck’s formula for the energy distribution of ‘black-body radiation’ was derived from this law by Bose. At high temperatures and low concentrations both quantum distribution laws tend to the classical distribution:



n̄(E) = Ae^‒E/kT.
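The convergence of the three laws can be sketched directly; the values of E, kT, and α below are illustrative, with α taken large to represent the dilute, high-temperature limit (and A identified with e^‒α).

# Sketch: Fermi-Dirac, Bose-Einstein, and classical occupancies compared.
import math

def fermi_dirac(E, alpha, kT):
    return 1.0 / (math.exp(alpha + E / kT) + 1.0)

def bose_einstein(E, alpha, kT):
    return 1.0 / (math.exp(alpha + E / kT) - 1.0)

def classical(E, alpha, kT):
    return math.exp(-alpha) * math.exp(-E / kT)   # A = e^-alpha

E, kT, alpha = 1.0, 1.0, 8.0    # illustrative; large alpha = dilute gas
print(fermi_dirac(E, alpha, kT))
print(bose_einstein(E, alpha, kT))
print(classical(E, alpha, kT))  # all three nearly equal in this limit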

Paramagnetism is the property of substances that have a positive magnetic susceptibility, the quantity μr ‒ 1, where μr is the relative permeability (the analogous electric quantity is Єr ‒ 1, where Єr is the relative permittivity). It is caused by the spins of electrons: paramagnetic substances have molecules or atoms in which there are unpaired electrons and thus a resultant magnetic moment. There is also a contribution to the magnetic properties from the orbital motion of the electrons. The relative permeability of a paramagnetic substance is thus greater than that of a vacuum, i.e., it is greater than unity.

A paramagnetic substance is regarded as an assembly of magnetic dipoles that have random orientation. In the presence of a field the magnetization is determined by competition between the effect of the field, tending to align the magnetic dipoles, and the random thermal agitation. In small fields and at high temperatures, the magnetization produced is proportional to the field strength, whereas at low temperatures or high field strengths a state of saturation is approached. As the temperature rises, the susceptibility falls according to Curie’s law or the Curie-Weiss law.

By Curie’s law, the susceptibility (χ) of a paramagnetic substance is inversely proportional to the thermodynamic temperature (T): χ = C/T. The constant ‘C’ is called the ‘Curie constant’ and is characteristic of the material. This law is explained by assuming that each molecule has an independent magnetic ‘dipole’ moment and that the tendency of the applied field to align these molecules is opposed by the random motion due to the temperature. A modification of Curie’s law, followed by many paramagnetic substances, is the Curie-Weiss law:

χ = C/(T ‒ θ ).

The law shows that the susceptibility is proportional to the excess of temperature over a fixed temperature θ; ‘θ’ is known as the Weiss constant and is a temperature characteristic of the material. Some metals, such as sodium and potassium, also exhibit a type of paramagnetism resulting from the magnetic moments of free, or nearly free, electrons in their conduction bands. This is characterized by a very small positive susceptibility and a very slight temperature dependence, and is known as ‘free-electron paramagnetism’ or ‘Pauli paramagnetism’.
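A sketch of the two laws, with an assumed Curie constant and Weiss constant (both illustrative), shows how they converge once T greatly exceeds θ:

# Sketch: Curie's law versus the Curie-Weiss law.
C = 1.5        # Curie constant, K (assumed)
theta = 100.0  # Weiss constant, K (assumed)

for T in (150.0, 300.0, 600.0):
    chi_curie = C / T               # Curie's law
    chi_cw = C / (T - theta)        # Curie-Weiss law
    print(T, chi_curie, chi_cw)     # the two converge as T >> theta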

Ferromagnetism is a property of certain solid substances that have a large positive magnetic susceptibility and are capable of being magnetized by weak magnetic fields. The chief ferromagnetic elements are iron, cobalt, and nickel, and many ferromagnetic alloys based on these metals also exist. Ferromagnetic materials exhibit magnetic ‘hysteresis’, a phenomenon whereby the magnetic flux through the medium depends not only on the existing magnetizing field but also on the previous state or states of the substance. The existence of this phenomenon necessitates a dissipation of energy when the substance is subjected to a cycle of magnetic changes; this is known as the magnetic hysteresis loss. The magnetic hysteresis loop is the curve obtained by plotting the magnetic flux density ‘B’ of a ferromagnetic material against the corresponding value of the magnetizing field ‘H’; its area gives the hysteresis loss per unit volume in taking the specimen through the prescribed magnetizing cycle. The general form of the loop is obtained for a symmetrical cycle between ‘H’ and ‘‒H’.

The magnetic hysteresis loss is the dissipation of energy due to magnetic hysteresis when the magnetic material is subjected to changes, particularly cyclic changes, of magnetization. Ferromagnetics are able to retain a certain degree of magnetization when the magnetizing field is removed. Those materials that retain a high percentage of their magnetization are said to be hard, and those that lose most of their magnetization are said to be soft; typical examples of hard ferromagnetics are cobalt steel and various alloys of nickel, aluminium, and cobalt, while typical soft magnetic materials are silicon steel and soft iron. The coercive force is the reversed magnetic field required to reduce the magnetic flux density in a substance from its remanent value to zero. Ferromagnetism is explained by the presence of domains. A ferromagnetic domain is a region of crystalline matter, whose volume may be 10⁻¹² to 10⁻⁸ m³, which contains atoms whose magnetic moments are aligned in the same direction. The domain is thus magnetically saturated and behaves like a magnet with its own magnetic axis and moment. The magnetic moment of the ferromagnetic atom results from the spin of the electrons in an unfilled inner shell of the atom. The formation of a domain depends upon the strong interaction forces (exchange forces) that are effective in a crystal lattice containing ferromagnetic atoms.

In an unmagnetized volume of a specimen, the domains are arranged in a random fashion with their magnetic axes pointing in all directions, so that the specimen has no resultant magnetic moment. Under the influence of a weak magnetic field, those domains whose magnetic axes have directions near to that of the field grow at the expense of their neighbours. In this process the atoms of neighbouring domains tend to align in the direction of the field, but the strong influence of the growing domain causes their axes to align parallel to its magnetic axis. The growth of these domains leads to a resultant magnetic moment, and hence magnetization of the specimen in the direction of the field. With increasing field strength, the growth of domains proceeds until there is, effectively, only one domain whose magnetic axis approximates to the field direction. The specimen now exhibits strong magnetization. Further increase in field strength causes the final alignment and magnetic saturation in the field direction. This explains the characteristic variation of magnetization with applied field strength. The presence of domains in ferromagnetic materials can be demonstrated by the use of ‘Bitter patterns’ or by the ‘Barkhausen effect’: the magnetization of a ferromagnetic substance does not increase or decrease steadily with steady increase or decrease of the magnetizing field, but proceeds in a series of minute jumps. The effect gives support to the domain theory of ferromagnetism.

For ferromagnetic solids there is a change from ferromagnetic to paramagnetic behaviour above a particular temperature, and the paramagnetic material then obeys the Curie-Weiss law above this temperature; this is the ‘Curie temperature’ for the material. Below this temperature the law is not obeyed. Some paramagnetic substances obey the law above a temperature θ and do not obey it below, yet are not ferromagnetic below this temperature. The value ‘θ’ in the Curie-Weiss law can be thought of as a correction to Curie’s law reflecting the extent to which the magnetic dipoles interact with each other. In materials exhibiting ‘antiferromagnetism’ the temperature ‘θ’ corresponds to the ‘Néel temperature’.

Antiferromagnetism is the property of certain materials that have a low positive magnetic susceptibility, as in paramagnetism, and exhibit a temperature dependence similar to that encountered in ferromagnetism. The susceptibility increases with temperature up to a certain point, called the ‘Néel temperature’, and then falls with increasing temperature in accordance with the Curie-Weiss law. The material thus becomes paramagnetic above the Néel temperature, which is analogous to the Curie temperature in the transition from ferromagnetism to paramagnetism. Antiferromagnetism is a property of certain inorganic compounds such as MnO, FeO, FeF2, and MnS. It results from interactions between neighbouring atoms leading to an antiparallel arrangement of adjacent magnetic dipole moments. (A dipole, it may be recalled, is a system of two equal and opposite charges placed at a very short distance apart; the product of either of the charges and the distance between them is known as the electric dipole moment. A small loop carrying a current I behaves as a magnetic dipole, with moment IA, where A is the area of the loop.)


As for the calculation of atomic energy levels generally: an exact calculation of the energies and other properties of quantum states is only possible for the simplest atoms, but various approximate methods give useful results, notably perturbation theory, an approximate method of solving a difficult problem when the equations to be solved depart only slightly from those of some problem already solved. For example, the orbit of a single planet round the sun is an ellipse; the perturbing effect of other planets modifies the orbit slightly, in a way calculable by this method. The technique finds considerable application in ‘wave mechanics’ and in ‘quantum electrodynamics’. Phenomena that are not amenable to solution by perturbation theory are said to be non-perturbative.
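A minimal sketch of the idea, on an assumed two-level system rather than a real atom: the first-order perturbed energies E ≈ E0 + ⟨i|V|i⟩ are compared with the exact eigenvalues (all matrix entries are illustrative).

# Sketch: first-order perturbation theory, H = H0 + V with V small.
import numpy as np

H0 = np.diag([1.0, 2.0])                 # unperturbed energies (assumed)
V = np.array([[0.00, 0.05],
              [0.05, 0.02]])             # small perturbation (assumed)

first_order = np.diag(H0) + np.diag(V)   # E_i ≈ E_i^0 + <i|V|i>
exact = np.linalg.eigvalsh(H0 + V)       # exact eigenvalues

print(first_order)   # [1.0, 2.02]
print(exact)         # close: the off-diagonal part only enters at second order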

The energies of unbound states of positive total energy form a continuum. This gives rise to the continuos background to an atomic spectrum, as electrons are captured from unbound state, the energy of an atomic state can be changed by the ‘Stark Effect’ or the ‘Zeeman Effect.’

The vibrational energies of molecules also have discrete values, for example, in a diatomic molecule the atoms oscillate in the line joining them. There is an equilibrium distance at which the force is zero, and the atoms deflect when closer and attract when further apart. The restraining force is very nearly proportional to the displacement, hence the oscillations are simple harmonic. Solution of the ‘Schrödinger wave equation’ gives the energies of a harmonic oscillation as:

En = ( n + ½ ) hƒ

Where ‘h’ is the Planck constant, ƒ is the frequency, and ‘n’ is the vibrational quantum number, which can be zero or any positive integer. The lowest possible vibrational energy of an oscillator is thus not zero but ½hƒ. This is the cause of zero-point energy. The potential energy of interaction of atoms is described more exactly by the Morse equation, which shows that the oscillations are anharmonic. The vibrations of molecules are investigated by the study of ‘band spectra’.

The rotational energy of a molecule is also quantized; according to the Schrödinger equation, a body with moment of inertia I about the axis of rotation has energies given by:

EJ = h²J(J + 1)/8π²I,

Where ‘J’ is the rotational quantum number, which can be zero or a positive integer. Rotational energies are found from ‘band spectra’.
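
A parallel sketch (again not from the source) evaluates the first few rotational levels; the moment of inertia is an assumed order-of-magnitude value for a light diatomic molecule:

    import math

    # Rotational levels: E_J = h^2 J(J + 1) / (8 pi^2 I).
    h = 6.626e-34    # Planck constant, J s
    I = 1.5e-46      # moment of inertia, kg m^2 (assumed example value)
    for J in range(4):
        E = h**2 * J * (J + 1) / (8 * math.pi**2 * I)
        print(J, E)  # spacing between successive levels grows linearly with J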

The energies of the states of the ‘nucleus’ can be determined from the gamma ray spectrum and from various nuclear reactions. Theory has been less successful in predicting these energies than those of electrons in atoms because the interactions of nucleons are very complicated. The energies are very little affected by external influences, but the ‘Mössbauer Effect’ has permitted the observation of some minute changes.

When X-rays are scattered by atomic centres arranged at regular intervals, interference phenomena occur, crystals providing gratings of a suitably small interval. The interference effects may be used to provide a spectrum of the beam of X-rays, since, according to ‘Bragg’s law’, the angle of reflection of X-rays from a crystal depends on the wavelength of the rays. For lower-energy X-rays mechanically ruled gratings can be used. Each chemical element emits characteristic X-rays in sharply defined groups in widely separated regions, known as the K, L, M, N, etc. series; the lines of any series move toward shorter wavelengths as the atomic number of the element concerned increases. If a parallel beam of X-rays of wavelength λ strikes a set of crystal planes, it is reflected from the different planes, interference occurring between X-rays reflected from adjacent planes. Bragg’s law states that constructive interference takes place when the difference in path-lengths is equal to an integral number of wavelengths:

2d sin θ = nλ,

In which ‘n’ is an integer, ‘d’ is the interplanar distance, and ‘θ’ is the angle between the incident X-ray and the crystal plane. This angle is called the ‘Bragg angle’, and a bright spot will be obtained on an interference pattern at this angle. A dark spot will be obtained, however, if 2d sin θ = mλ, where ‘m’ is half-integral. The structure of a crystal can be determined from a set of interference patterns found at various angles from the different crystal faces.
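
Bragg’s law is easily inverted to find the angles at which bright reflections occur. In this sketch (not from the source), the wavelength is that of a common laboratory X-ray line and the interplanar spacing is an assumed example value:

    import math

    # Bragg angles from 2 d sin(theta) = n lambda.
    lam = 1.54e-10   # X-ray wavelength, m (Cu K-alpha line)
    d = 2.82e-10     # interplanar spacing, m (assumed example value)
    for n in range(1, 4):
        s = n * lam / (2 * d)
        if s <= 1.0:                              # a reflection exists only if sin(theta) <= 1
            print(n, math.degrees(math.asin(s)))  # Bragg angle in degrees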

The atom is a concept originally introduced by the ancient Greeks, as a tiny indivisible component of matter, developed by Dalton, as the smallest part of an element that can take part in a chemical reaction, and investigated experimentally in the late 19th and early 20th centuries. Following the discovery of the electron (1897), it was recognized that atoms have structure: since electrons are negatively charged, a neutral atom must have a positive component. The experiments of Geiger and Marsden on the scattering of alpha particles by thin metal foils led Rutherford to propose a model (1912) in which nearly all the mass of the atom is concentrated at its centre in a region of positive charge, the nucleus, of radius of the order of 10⁻¹⁵ metre. The electrons occupy the surrounding space to a radius of 10⁻¹¹ to 10⁻¹⁰ m. Rutherford also proposed that the nucleus has a charge of Ze and is surrounded by Z electrons (Z is the atomic number). According to classical physics such a system would emit electromagnetic radiation continuously and consequently no permanent atom would be possible. This problem was solved by the development of the ‘Quantum Theory’.

The ‘Bohr Theory of the Atom’ (1913) introduced the notion that an electron in an atom is normally in a state of lowest energy (ground state), in which it remains indefinitely unless disturbed. By absorption of electromagnetic radiation or collision with another particle the atom may be excited, that is, an electron is moved into a state of higher energy. Such excited states usually have short lifetimes (typically nanoseconds) and the electron returns to the ground state, commonly by emitting one or more ‘quanta’ of electromagnetic radiation. The original theory was only partially successful in predicting the energies and other properties of the electronic states. Attempts were made to improve the theory by postulating elliptic orbits (Sommerfeld 1915) and electron spin (Pauli 1925), but a satisfactory theory only became possible upon the development of ‘Wave Mechanics’ (1925).

According to modern theories, an electron does not follow a determinate orbit as envisaged by Bohr, but is in a state described by the solution of a wave equation. This determines the ‘probability’ that the electron may be found in a given element of volume. Each state is characterized by a set of four quantum numbers, and, according to the ‘Pauli Exclusion Principle’, not more than one electron can be in a given state.

An exact calculation of the energies and other properties of the quantum states is possible for the simplest atoms, but various approximate methods give useful results, i.e., approximate methods of solving a difficult problem whose equations depart only slightly from those of some problem already solved. The properties of the innermost electron states of complex atoms are found experimentally by the study of X-ray spectra. The outer electrons are investigated using spectra in the infrared, visible, and ultraviolet. Certain details have been studied using microwaves. One example is the Lamb shift, a small difference in energy between the 2S½ and 2P½ levels of hydrogen; these levels would have the same energy according to the wave mechanics of Dirac. The actual shift can be explained by a correction to the energies based on the theory of the interaction of electromagnetic fields with matter, in which the fields themselves are quantized. Yet other information may be obtained from magnetic and other chemical properties.

The appearance potential is (1) the potential difference through which an electron must be accelerated from rest to produce a given ion from its parent atom or molecule, or (2) this potential difference multiplied by the electron charge, giving the least energy required to produce the ion. A simple ionizing process gives the ‘ionization potential’ of the substance, for example:

Ar + e ➝ Ar⁺ + 2e.

Higher appearance potentials may be found for multiply charged ions:

Ar + e ➝ Ar⁺⁺ + 3e.

The atomic number is the number of protons in the nucleus of an atom, equal to the number of electrons revolving around the nucleus in the neutral atom. The atomic number determines the chemical properties of an element and the element’s position in the periodic table, the classification of the chemical elements in tabular form in order of atomic number. The elements show a periodicity of properties, chemically similar elements recurring in a definite order. The sequence of elements is thus broken into horizontal ‘periods’ and vertical ‘groups’, the elements in each group showing close chemical analogies, i.e., in valency, chemical properties, etc. All the isotopes of an element have the same atomic number, although different isotopes have different mass numbers.

An atomic orbital is an allowed ‘wave function’ of an electron in an atom, obtained by a solution of the Schrödinger wave equation. In a hydrogen atom, for example, the electron moves in the electrostatic field of the nucleus and its potential energy is ‒e²/r, where ‘e’ is the electron charge and ‘r’ its distance from the nucleus. A precise orbit cannot be considered, as in Bohr’s theory of the atom; instead the behaviour of the electron is described by its wave function, Ψ, which is a mathematical function of its position with respect to the nucleus. The significance of the wave function is that |Ψ|²dτ is the probability of finding the electron in the element of volume dτ.

Solution of Schrödinger’s equation for the hydrogen atom shows that the electron can only have certain allowed wave functions (eigenfunctions). Each of these corresponds to a probability distribution in space given by the manner in which |Ψ|² varies with position. They also have an associated value of the energy E. These allowed wave functions, or orbitals, are characterized by three quantum numbers similar to those characterizing the allowed orbits in the quantum theory of the atom: n, the ‘principal quantum number’, can have values of 1, 2, 3, etc.; the orbital with n = 1 has the lowest energy. The states of the electron with n = 1, 2, 3, etc., are called ‘shells’ and designated the K, L, M shells, etc. l, the ‘azimuthal quantum number’, can for a given value of n have values of 0, 1, 2, . . . , (n ‒ 1); for example, the M shell (n = 3) has three subshells with l = 0, l = 1, and l = 2. Orbitals with l = 0, 1, 2, and 3 are called s, p, d, and f orbitals respectively. The significance of the l quantum number is that it gives the angular momentum of the electron. The orbital angular momentum of an electron is given by:

√[l(l + 1)](h/2π)

m, the ‘magnetic quantum number’, can for a given value of l have values of ‒l, ‒(l ‒ 1), . . . , 0, . . . , (l ‒ 1), l. Thus for a p orbital, for which l = 1, there are in fact three different orbitals, with m = ‒1, 0, and 1. These orbitals, with the same values of n and l but different m values, have the same energy. The significance of this quantum number is that it gives the number of different levels that would be produced if the atom were subjected to an external magnetic field.

According to wave theory the electron may be at any distance from the nucleus, but in fact there is only a reasonable chance of it being within a distance of ~5 x 10⁻¹¹ metre. Indeed, the maximum probability occurs when r = a0, where a0 is the radius of the first Bohr orbit. It is customary to represent an orbital by a surface enclosing some arbitrarily decided probability (say 95%) of finding the electron. Notably, although ‘s’ orbitals (l = 0) are spherical, orbitals with l > 0 have an angular dependence. Finally, the electron in an atom can have a fourth quantum number, ms, characterizing its spin direction. This can be +½ or ‒½, and according to the Pauli exclusion principle each orbital can hold only two electrons. The four quantum numbers lead to an explanation of the periodic table of the elements.
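
The counting rules for the quantum numbers n, l, and m, together with the two spin states allowed by the Pauli exclusion principle, fix the capacity of each shell at 2n². A small enumeration sketch (not from the source):

    # Enumerate allowed (l, m) orbitals for each n and the resulting shell capacity.
    for n in range(1, 4):
        orbitals = [(l, m) for l in range(n) for m in range(-l, l + 1)]
        # two electrons (m_s = +1/2, -1/2) per orbital
        print(n, len(orbitals), 2 * len(orbitals))   # K: 1, 2; L: 4, 8; M: 9, 18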

The least distance in a progressive wave between two surfaces with the same phase is the wavelength. If v is the phase speed and ν the frequency, the wavelength is given by v = νλ. For electromagnetic radiation the phase speed and wavelength in a material medium are equal to their values in free space divided by the ‘refractive index’. The wavelengths of spectral lines are normally specified for free space.

Optical wavelengths are measured absolutely using interferometers or diffraction gratings, or comparatively using a prism spectrometer. The wavelength can only have an exact value for an infinite wave train. If an atomic body emits a quantum in the form of a train of waves of duration τ, the fractional uncertainty of the wavelength, Δλ/λ, is approximately λ/2πcτ, where ‘c’ is the speed in free space. This is associated with the indeterminacy of the energy given by the uncertainty principle.

The wave function Ψ is a mathematical quantity analogous to the amplitude of a wave that appears in the equations of wave mechanics, particularly the Schrödinger wave equation. The most generally accepted interpretation is that |Ψ|²dV represents the probability that a particle is within the volume element dV. A set of waves that represents the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice) is a set of de Broglie waves, with wavelength given by the ‘de Broglie equation’. They are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point. These waves were predicted by de Broglie in 1924 and observed in 1927 in the Davisson-Germer experiment. Note that ‘Ψ’ is often a complex quantity.

The analogy between ‘Ψ’ and the amplitude of a wave is purely formal. There is no macroscopic physical quantity with which ‘Ψ’ can be identified, in contrast with, for example, the amplitude of an electromagnetic wave, which is expressed in terms of electric and magnetic field intensities.

Overall, there are an infinite number of functions satisfying a wave equation, but only some of these will satisfy the boundary conditions. ‘Ψ’ must be finite and single-valued at every point, and the spatial derivatives must be continuous at an interface. For a particle subject to a law of conservation of number, the integral of |Ψ|²dV over all space must remain equal to 1, since this is the probability that it exists somewhere. To satisfy this condition the wave equation must be of the first order in (∂Ψ/∂t). Wave functions obtained when these conditions are applied form a set of characteristic functions of the Schrödinger wave equation. These are often called eigenfunctions and correspond to a set of fixed energy values in which the system may exist, called eigenvalues; energy eigenfunctions describe stationary states of the system. For certain bound states of a system the eigenfunctions do not change sign on reversing the co-ordinate axes; these states are said to have even parity. For other states the sign changes on space reversal and the parity is said to be odd.

A class of eigenvalue problems in physics takes the form:

ΩΨ = λΨ

Where Ω is some mathematical operation (multiplication by a number, differentiation, etc.) on a function Ψ, which is called the ‘eigenfunction’. λ is called the ‘eigenvalue’, which in a physical system will be identified with an observable quantity.

Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations, each describing the motion of one of the oscillators in terms of the positions of all the others. A ‘harmonic’ solution may be sought, in which each displacement is assumed to undergo simple harmonic motion in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where N is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a ‘matrix’ equation of the form:

Mχ = ω²χ.

Where M is an N x N matrix called the ‘dynamical matrix’, χ is an N x 1 column matrix, and ω² is the square of the angular frequency of the harmonic solution. The problem is now an eigenvalue problem with eigenfunctions χ, which are the normal modes of the system, with corresponding eigenvalues ω². As χ can be expressed as a column vector, χ is a vector in an N-dimensional vector space. For this reason, χ is also often called an eigenvector.
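
As an illustration (not from the source), consider two equal masses connected to each other and to fixed walls by three identical springs; with unit masses, the dynamical matrix is [[2k, -k], [-k, 2k]], and its eigenvectors are the familiar in-phase and out-of-phase normal modes:

    import numpy as np

    # Normal modes from the eigenvalue problem M chi = omega^2 chi (assumed example system).
    k = 1.0                              # spring constant
    M = np.array([[2 * k, -k],
                  [-k, 2 * k]])          # dynamical matrix for two coupled unit masses
    w2, modes = np.linalg.eigh(M)        # eigenvalues omega^2 and eigenvectors
    print(np.sqrt(w2))                   # frequencies: 1.0 and sqrt(3)
    print(modes)                         # columns: in-phase and out-of-phase modes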

When the collection of oscillators is a complicated three-dimensional molecule, the casting of the problem into normal modes is an effective simplification of the system. The symmetry principles of group theory can then be applied: the symmetry operations of any physical system must possess the properties of a mathematical group. Groups of rotations, both finite and infinite, are important in the analysis of the symmetry of atoms and molecules, and underlie the quantum theory of angular momentum. Eigenvalue problems arising in the quantum mechanics of atomic or molecular systems yield stationary states corresponding to the normal mode oscillations of either the electrons in an atom or the atoms within a molecule. Angular momentum quantum numbers correspond to a labelling system used to classify these normal modes, and analysing the transitions between them leads to theoretical predictions of atomic or molecular spectra. This kind of analysis requires an appreciation of the symmetry properties of the molecule: the set of operations (rotations, inversions, etc.) that leave the molecule invariant makes up the point group of that molecule. Normal modes sharing the same ω eigenvalues are said to correspond to the irreducible representations of the molecule’s point group. It is among these irreducible representations that one will find the infrared absorption spectrum for the vibrational normal modes of the molecule.

Eigenvalue problems play a particularly important role in quantum mechanics. In quantum mechanics, physical observables (location, momentum, energy, etc.) are represented by operators (differentiation with respect to a variable, multiplication by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy. For a classical wave, the square modulus of its amplitude measures its energy. For a wave function, the square modulus of its amplitude at a location χ represents not energy but probability, i.e., the probability that a particle, a localized packet of energy, will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle and is perceptible only after many location detection events have occurred. A measurement of position on a quantum particle may be written symbolically as:

X Ψ(χ) = χΨ(χ),

Where Ψ(χ) is said to be an eigenvector of the location operator and χ is the eigenvalue, which represents the location. Each Ψ(χ) represents the amplitude at the location χ, and |Ψ(χ)|² is the probability that the particle will be found in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations for the particle is the linear superposition of all Ψ(χ) for 0 ≤ χ ≤ ∞. This is an instance of the principle of superposition, which holds generally in physics wherever linear phenomena occur. In elasticity, the principle states that each stress is accompanied by the same strains whether it acts alone or in conjunction with others, which is true so long as the total stress does not exceed the limit of proportionality. In vibrations and wave motion the principle asserts that one set of vibrations or waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction so that, at a particular instant, the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.

The superposition of two vibrations, y1 and y2, both of frequency ƒ, produces a resultant vibration of the same frequency, with amplitude and phase that are functions of the component amplitudes and phases. If:

y1 = a1 sin(2πt + δ1)

y2 = a2 sin(sin(2πt + δ2)

Then the resultant vibration, y, is given by:

y1 + y2 = A sin(2πt + Δ),

Where the amplitude A and phase Δ are both functions of a1, a2, δ1, and δ2.
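
The resultant amplitude A and phase Δ follow at once from adding the component vibrations as phasors (complex amplitudes). A minimal sketch, with assumed example amplitudes and phases:

    import cmath

    # a1 sin(2 pi f t + d1) + a2 sin(2 pi f t + d2) = A sin(2 pi f t + D).
    a1, d1 = 1.0, 0.0            # first component (example values)
    a2, d2 = 0.5, 1.0            # second component (phases in radians)
    z = a1 * cmath.exp(1j * d1) + a2 * cmath.exp(1j * d2)
    A, D = abs(z), cmath.phase(z)
    print(A, D)                  # resultant amplitude and phase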

The eigenvalue problem in quantum mechanics therefore represents the act of measurement: the eigenvectors of an observable are the possible states (position, in the case of χ) that the quantum system can have. Position and momentum are related by the uncertainty principle, which states that the product of the uncertainty in the measured value of a component of momentum (pχ) and the uncertainty in the corresponding co-ordinate of position (χ) is of the same order of magnitude as the Planck constant. Thus, while an accurate measurement of position is possible, as a result of the uncertainty principle it produces a large momentum spread. Subsequent measurements of the position acquire a spread themselves, which makes continuous monitoring of the position impossible.

As in classical mechanics, quantum mechanics may take differential or matrix forms. Both forms have been shown to be equivalent. The differential form of quantum mechanics is called wave mechanics (Schrödinger), in which the operators are differential operators or multiplications by variables. Eigenfunctions in wave mechanics are wave functions corresponding to stationary states. The matrix form of quantum mechanics is called matrix mechanics (Born and Heisenberg); in it the operators are represented by matrices acting on eigenvectors.

The relationship between matrix and wave mechanics is similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which has a matrix representation.

Pauli, in 1925, suggested that each electron could exist in two states with the same orbital motion. Uhlenbeck and Goudsmit interpreted these states as due to the spin of the electron about an axis. The electron is assumed to have an intrinsic angular momentum in addition to any angular momentum due to its orbital motion. This intrinsic angular momentum is called ‘spin’. It is quantized in values of √[s(s + 1)]h/2π, where ‘s’ is the ‘spin quantum number’ and ‘h’ the Planck constant. For an electron the component of spin in a given direction can have values of +½ and ‒½, leading to the two possible states. An electron with spin behaves like a small magnet, with an intrinsic magnetic moment. The magneton is the fundamental constant in which such moments are expressed. The circulating current created by the angular momentum p of an electron moving in its orbit produces a magnetic moment μ = ep/2m, where ‘e’ and ‘m’ are the charge and mass of the electron. By substituting the quantized relation p = jh/2π (h = the Planck constant; j = magnetic quantum number), μ = jeh/4πm. When j is taken as unity, the quantity eh/4πm is called the Bohr magneton; its value is:

9.274 078 x 10⁻²⁴ A m².

According to the wave mechanics of Dirac, the magnetic moment associated with the spin of the electron would be exactly one Bohr magneton, although quantum electrodynamics shows that a small difference can be expected. The nuclear magneton, μN, is equal to (me/mp)μB, where mp is the mass of the proton. The value of μN is:

5.050 824 x 10⁻²⁷ A m².

The magnetic moment of a proton is, in fact, 2.792 85 nuclear magnetons. Two states of different energy result from interactions between the magnetic field due to the electron’s spin and that caused by its orbital motion. These are two closely spaced states resulting from the two possible spin directions, and they lead to the two lines in a doublet.
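
The two magnetons quoted above follow directly from the defining formulas; a short check (not from the source), using CODATA values of the constants:

    import math

    # Bohr magneton mu_B = e h / (4 pi m_e); nuclear magneton mu_N = (m_e / m_p) mu_B.
    e = 1.602176634e-19       # electron charge, C
    h = 6.62607015e-34        # Planck constant, J s
    m_e = 9.1093837015e-31    # electron mass, kg
    m_p = 1.67262192369e-27   # proton mass, kg
    mu_B = e * h / (4 * math.pi * m_e)
    mu_N = (m_e / m_p) * mu_B
    print(mu_B, mu_N)         # ~9.274e-24 and ~5.051e-27 A m^2
    print(2.79285 * mu_N)     # proton magnetic moment, ~1.41e-26 A m^2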

In an external magnetic field the angular momentum vector of the electron precesses. For example, if a body spins about its axis of symmetry OC (where O is a fixed point) and OC is rotating round an axis OZ fixed outside the body, the body is said to be precessing round OZ. OZ is the precession axis. A gyroscope precesses due to an applied torque called the precessional torque. If the moment of inertia of a body about OC is I and its angular velocity is ω, a torque K, whose axis is perpendicular to the axis of rotation, will produce an angular velocity of precession Ω about an axis perpendicular to both ω and the torque axis, where:

Ω = K/Iω.

Not all orientations of the angular momentum vector to the field direction are allowed: there is a quantization, so that the component of the angular momentum along the field direction is restricted to certain values of h/2π. The angular momentum vector has allowed directions such that the component is mS(h/2π), where mS is the magnetic spin quantum number. For a given value of s, mS has the values s, (s ‒ 1), . . . , ‒s. For example, if s = 1, mS is 1, 0, and ‒1. The electron has a spin of ½ and thus mS is +½ and ‒½; the components of its spin angular momentum along the field direction are ±½(h/2π). This phenomenon is called ‘space quantization’.

The resultant spin of a number of particles is the vector sum of the spins (s) of the individual particles and is given the symbol S. For example, in an atom two electrons with spins of ½ could combine to give a resultant spin of S = ½ + ½ = 1 or a resultant of S = ½ ‒ ½ = 0.

Alternative symbols used for spin are J (for elementary particles) and I (for a nucleus). Most elementary particles have a non-zero spin, which may be either integral or half-integral. The spin of a nucleus is the resultant of the spins of its constituent nucleons.

Angular momentum is the moment of momentum about an axis (symbol: L), the product of the moment of inertia and the angular velocity (Iω). Angular momentum is a ‘pseudo-vector’ quantity and is conserved in an isolated system. The moment of inertia of a body about an axis is the sum of the products of the mass of each particle of the body and the square of its perpendicular distance from the axis; this addition is replaced by an integration in the case of a continuous body. For a rigid body moving about a fixed axis, the laws of motion have the same form as those of rectilinear motion, with moment of inertia replacing mass, angular velocity replacing linear velocity, angular momentum replacing linear momentum, etc. Hence the ‘energy’ of a body rotating about a fixed axis with angular velocity ω is ½Iω², which corresponds to ½mv² for the kinetic energy of a body of mass m translated with velocity v.

The linear momentum p of a particle is the product of the mass and the velocity of the particle. It is a ‘vector’ quantity directed along the motion of the particle. The linear momentum of a body, or of a system of particles, is the vector sum of the linear momenta of the individual particles. If a body of mass M is translated (moved so that all points travel in parallel directions through equal distances) with a velocity V, its momentum is MV, the momentum of a particle of mass M at the centre of gravity of the body. Angular momentum, the product of moment of inertia and angular velocity, is likewise a pseudo-vector quantity conserved in an isolated system, the angular velocity being equal to the linear velocity divided by the radius.

If the moment of inertia of a body of mass M about an axis through the centre of mass is I, the moment of inertia about a parallel axis distance h from the first axis is I + Mh². If the radius of gyration is k about the first axis, it is √(k² + h²) about the second. The moment of inertia of a uniform solid body about an axis of symmetry is given by the product of the mass and the sum of the squares of the other semi-axes, divided by 3, 4, or 5 according to whether the body is rectangular, elliptical, or ellipsoidal (Routh’s rule).

The circle is a special case of the ellipse. Routh’s rule works for a circular or elliptical cylinder, and for elliptical discs it works for all three axes of symmetry. For example, for a circular disc of radius a and mass M, the moment of inertia about an axis through the centre of the disc and lying (a) perpendicular to the disc, (b) in the plane of the disc is:

(a) ¼M(a² + a²) = ½Ma²

(b) ¼Ma².

A formula for calculating moments of inertia I:

I = mass x [a²/(3 + n) + b²/(3 + nʹ)],

Where n and nʹ are the numbers of principal curvatures of the surfaces that end the semiaxes in question, and a and b are the lengths of the semiaxes. Thus, if the body is a rectangular parallelepiped, n = nʹ = 0, and I = mass x (a²/3 + b²/3).

If the body is a cylinder, then for an axis through its centre, perpendicular to the cylinder axis, n = 0 and nʹ = 1, so that I = mass x (a²/3 + b²/4).

If I is desired about the axis of the cylinder, then n = nʹ = 1 and a = b = r (the cylinder radius), and I = mass x (r²/2).
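
Routh’s rule reduces to a one-line function; the sketch below (not from the source) reproduces the three cases just listed, with an assumed mass and dimensions:

    # Routh's rule: I = M * (a^2/(3 + n) + b^2/(3 + n')).
    def routh(mass, a, b, n, n_prime):
        return mass * (a**2 / (3 + n) + b**2 / (3 + n_prime))

    M, r, half_len = 2.0, 0.1, 0.3        # example mass (kg), radius and half-length (m)
    print(routh(M, r, r, 1, 1))           # cylinder about its own axis: M r^2 / 2
    print(routh(M, half_len, r, 0, 1))    # cylinder about a transverse axis: M(a^2/3 + b^2/4)
    print(routh(M, 0.1, 0.2, 0, 0))       # rectangular parallelepiped: M(a^2/3 + b^2/3)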

A matrix is an array of numbers, similar in appearance to a determinant but differing from it in not having a numerical value in the ordinary sense of the term; it obeys its own rules of multiplication, addition, etc. An array of mn numbers set out in m rows and n columns is a matrix of order m x n. The separate numbers are usually called elements. Such arrays of numbers, treated as single entities and manipulated by the rules of matrix algebra, are of use whenever simultaneous equations are found, e.g., changing from one set of Cartesian axes to another set inclined to the first, quantum theory, and electrical networks. Matrices are very prominent in the mathematical expression of quantum mechanics.

Matrix mechanics is a mathematical form of quantum mechanics that was developed by Born and Heisenberg, originally simultaneously with, but independently of, wave mechanics. It is equivalent to wave mechanics, but in it the wave function of wave mechanics is replaced by ‘vectors’ in an abstract space (Hilbert space), and observable quantities of the physical world, such as energy, momentum, co-ordinates, etc., are represented by ‘matrices’.

The theory involves the idea that a measurement on a system disturbs, to some extent, the system itself. With large systems this is of no consequence, and the system can be treated by classical mechanics. On the atomic scale, however, the results depend on the order in which the observations are made. Thus, if p denotes an observation of a component of momentum and q an observation of the corresponding co-ordinate, pq ≠ qp. Here p and q are not physical quantities but operators, which in matrix mechanics obey the relationship:

pq ‒ qp = ih/2π,

Where h is the Planck constant, equal to 6.626 076 x 10⁻³⁴ J s. The matrix elements are connected with the transition probabilities between various states of the system.

A vector is a quantity with both magnitude and direction. It can be represented by a line whose length is proportional to the magnitude and whose direction is that of the vector, or by three components in a rectangular co-ordinate system. When the angle between two unit vectors is 90°, their scalar product is zero and the magnitude of their vector product is one.

A true vector, or polar vector, involves a displacement or virtual displacement. Polar vectors include velocity, acceleration, force, and electric and magnetic field strengths. The signs of their components are reversed on reversing the co-ordinate axes. Their dimensions include length to an odd power.

A pseudo-vector, or axial vector, involves the orientation of an axis in space. The direction is conventionally obtained in a right-handed system by sighting along the axis so that the rotation appears clockwise. Pseudo-vectors include angular velocity, vector area, and magnetic flux density. The signs of their components are unchanged on reversing the co-ordinate axes. Their dimensions include length to an even power.

Polar vectors and axial vectors obey the same laws of vector analysis. (a) Vector addition: if two vectors A and B are represented in magnitude and direction by the adjacent sides of a parallelogram, the diagonal represents the vector sum (A + B) in magnitude and direction; forces, velocities, etc., combine in this way. (b) Vector multiplication: there are two ways of multiplying vectors. (i) The ‘scalar product’ of two vectors equals the product of their magnitudes and the cosine of the angle between them, and is a scalar quantity. It is usually written

A • B ( reads as A dot B )

(ii) The vector product of two vectors A and B is defined as a pseudo-vector of magnitude AB sin θ, having a direction perpendicular to the plane containing them. The sense of the product along this perpendicular is defined by the rule: if A is turned toward B through the smaller angle, this rotation appears clockwise when viewed along the direction of the vector product. A vector product is usually written:

A x B ( reads as A cross B ).

Vectors should be distinguished from scalars by printing the symbols in bold italic letters.
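
The scalar and vector products described in (i) and (ii) can be checked numerically; a brief sketch (not from the source) using perpendicular unit vectors:

    import numpy as np

    # Scalar and vector products of two perpendicular unit vectors.
    A = np.array([1.0, 0.0, 0.0])
    B = np.array([0.0, 1.0, 0.0])
    print(np.dot(A, B))     # A . B = AB cos(theta) = 0 here
    print(np.cross(A, B))   # A x B = [0, 0, 1]: magnitude AB sin(theta), perpendicular to both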

A unified field theory seeks to unite the properties of gravitational, electromagnetic, weak, and strong interactions so as to predict all their characteristics. At present it is not known whether such a theory can be developed, or whether the physical universe is amenable to a single analysis in terms of the current concepts of physics. Theories based on extended objects, such as superstring and supersymmetric theories, may, however, make such a synthesis attainable in the future.

A Grand Unified Theory is a unified quantum field theory of the electromagnetic, weak, and strong interactions. In most models, the known interactions are viewed as low-energy manifestations of a single unified interaction, the unification taking place at energies (typically 10¹⁵ GeV) very much higher than those currently accessible in particle accelerators. One feature of the Grand Unified Theory is that ‘baryon’ number and ‘lepton’ number would no longer be absolutely conserved quantum numbers, with the consequence that processes such as ‘proton decay’, for example the decay of a proton into a positron and a π0, p → e+π0, would be expected to be observed. Predicted lifetimes for proton decay are very long, typically 10³⁵ years. Searches for proton decay are being undertaken by many groups, using large underground detectors, so far without success.

Gravitation is one of the mutual attractions binding the universe in its totality, independent of the electromagnetic and the strong and weak nuclear interactions. Newton showed that the external effect of a spherically symmetric body is the same as if the whole mass were concentrated at the centre. Astronomical bodies are roughly spherically symmetric, so they can be treated as point particles to a very good approximation. On this assumption Newton showed that his law was consistent with Kepler’s laws. Until recently, all experiments had confirmed the accuracy of the inverse square law and the independence of the law of the nature of the substances involved, but in the past few years evidence has been found against both.

The strength of a gravitational field at any point is given by the force exerted on unit mass at that point. The field intensity at a distance χ from a point mass m is therefore Gm/χ², and acts toward m. Gravitational field strength is measured in ‘newtons’ per kilogram. The gravitational potential V at a point is the work done in moving a unit mass from infinity to the point against the field; due to a point mass m:

V = Gm ∫∞χ dχ/χ² = ‒Gm/χ.

V is a scalar, measured in joules per kilogram. The following special cases are also important: (a) the potential at a point distance χ from the centre of a hollow homogeneous spherical shell of mass m, and outside the shell, is:

V = ‒Gm / χ.

The potential is the same as if the whole mass of the shell were concentrated at the centre. (b) At any point inside the spherical shell the potential is equal to its value at the surface:

V = ‒Gm / r

Where r is the radius of the shell. Thus, there is no resultant force acting at any point inside the shell, since no potential difference acts between any two points. (c) The potential at a point distance χ from the centre of a homogeneous solid sphere, and outside the sphere, is the same as that for a shell:

V = ‒Gm / χ

(d) At a point inside the sphere of radius r:

V = ‒Gm(3r² ‒ χ²)/2r³.
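
Cases (c) and (d) combine into a single piecewise formula for the potential of a solid sphere. The sketch below (not from the source) uses round values close to the earth’s mass and radius purely as an example:

    # Potential of a homogeneous solid sphere: -Gm/x outside, -Gm(3r^2 - x^2)/(2r^3) inside.
    G = 6.674e-11                        # gravitational constant, N m^2 kg^-2

    def potential(x, m, r):
        if x >= r:
            return -G * m / x            # case (c): outside the sphere
        return -G * m * (3 * r**2 - x**2) / (2 * r**3)   # case (d): inside

    m, r = 5.97e24, 6.37e6               # roughly the earth's mass (kg) and radius (m)
    print(potential(7.0e6, m, r))        # a point above the surface
    print(potential(0.0, m, r))          # the centre: 1.5 times the surface value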

The essential property of gravitation is that it causes a change in motion, in particular the acceleration of free fall (g) in the earth’s gravitational field. According to the general theory of relativity, gravitational fields change the geometry of space-time, causing it to become curved. It is this curvature of space-time, produced in the vicinity of matter, that controls the natural motions of bodies. General relativity may thus be considered as a theory of gravitation, differences between it and Newtonian gravitation only appearing when the gravitational fields become very strong, as with ‘black holes’ and ‘neutron stars’, or when very accurate measurements can be made.

Another universal binding interaction is the electromagnetic interaction between elementary particles, arising as a consequence of their associated electric and magnetic fields. The electrostatic force between charged particles is an example. This force may be described in terms of the exchange of virtual photons, because the law of conservation of mass and energy may be broken by an amount ΔE, provided this only occurs for a time Δt such that:

ΔEΔt ≤ h/4π.

This makes it possible for particles to be created for short periods of time where their creation would otherwise violate conservation of energy. These particles are called ‘virtual particles’. For example, in a complete vacuum, in which no ‘real’ particles exist, pairs of virtual electrons and positrons are continuously forming and rapidly disappearing (in less than 10⁻²³ seconds). Other conservation laws, such as those applying to angular momentum, Isospin, etc., cannot be violated even for short periods of time.

Because its strength lies between those of the strong and weak nuclear interactions, particles decaying by electromagnetic interaction do so with lifetimes shorter than those decaying by weak interaction, but longer than those decaying under the influence of strong interaction. An example of electromagnetic decay is: π0 → γ + γ. This decay process, with a mean lifetime of 8.4 x 10⁻¹⁷ seconds, may be understood as the annihilation of the quark and the antiquark making up the π0 into a pair of photons. The quantum numbers that have to be conserved in electromagnetic interactions are angular momentum, charge, baryon number, Isospin quantum number I3, strangeness, charm, parity, and charge conjugation parity.

Quantum electrodynamic descriptions of photon-mediated electromagnetic interactions have been verified over a great range of distances and have led to highly accurate predictions. Quantum electrodynamics is a ‘gauge theory’: the electromagnetic force can be derived by requiring that the equation describing the motion of a charged particle remain unchanged in the course of local symmetry operations. Specifically, if the phase of the wave function by which a charged particle is described can be altered independently at every point in space, quantum electrodynamics requires that the electromagnetic interaction and its mediating photon exist in order to maintain the symmetry.

The weak interaction is a kind of interaction between elementary particles that is weaker than the strong interaction by a factor of about 10¹². When strong interactions can occur in reactions involving elementary particles, the weak interactions are usually unobservable. Sometimes, however, strong and electromagnetic interactions are prevented because they would violate the conservation of some quantum number, e.g., strangeness, that has to be conserved in such reactions. When this happens, weak interactions may still occur.

The weak interaction operates over an extremely short range (about 2 x 10⁻¹⁸ m). It is mediated by the exchange of a very heavy particle (a gauge boson), which may be the charged W+ or W‒ particle (mass about 80 GeV/c²) or the neutral Z0 particle (mass about 91 GeV/c²). The gauge bosons that mediate the weak interaction are analogous to the photon that mediates the electromagnetic interaction. Weak interactions mediated by W particles involve a change in the charge, and hence the identity, of the reacting particle. The neutral Z0 does not lead to such a change in identity. Both sorts of weak interaction can violate parity.
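
The quoted range follows from the boson masses through the uncertainty-principle estimate r ≈ ħ/(mc). A quick check (not from the source):

    # Range of a force mediated by a massive boson: r ~ hbar / (m c).
    hbar = 1.054571817e-34    # reduced Planck constant, J s
    c = 2.99792458e8          # speed of light, m/s
    GeV = 1.602176634e-10     # one GeV expressed in joules
    m_W = 80.0 * GeV / c**2   # W boson mass, kg
    print(hbar / (m_W * c))   # ~2.5e-18 m, consistent with the range quoted above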

Most of the long-lived elementary particles decay as a result of weak interactions. For example, the kaon decay K+ ➝ μ+ + νμ may be thought of as due to the annihilation of the u quark and the s antiquark in the K+ to produce a virtual W+ boson, which then converts into a positive muon and a neutrino. This decay cannot proceed by a strong or an electromagnetic interaction because strangeness is not conserved. Beta decay is the most common example of weak interaction decay. Because the interaction is so weak, particles that can only decay by weak interactions do so slowly, i.e., they have very long lifetimes. Other examples of weak interactions include the scattering of the neutrino by other particles and certain very small effects on electrons within the atom.

Understanding of weak interactions is based on the electroweak theory, in which it is proposed that the weak and electromagnetic interactions are different manifestations of a single underlying force, known as the electroweak force. Many of the predictions of the theory have been confirmed experimentally.

The electroweak theory is a gauge theory, also called quantum flavour dynamics, that provides a unified description of both the electromagnetic and weak interactions. In the Glashow-Weinberg-Salam theory, also known as the standard model, electroweak interactions arise from the exchange of photons and of massive charged W+ and W‒ and neutral Z0 bosons of spin 1 between quarks and leptons. The extremely massive charged particle, symbol W+ or W‒, mediates certain types of weak interaction; the neutral Z-particle, or Z boson, symbol Z0, mediates the other types. Both are gauge bosons. The W- and Z-particles were first detected at CERN (1983) by studying collisions between protons and antiprotons with total energy 540 GeV in centre-of-mass co-ordinates. The rest masses were determined as about 80 GeV/c² and 91 GeV/c² for the W- and Z-particles respectively, as had been predicted by the electroweak theory.

The interaction strengths of the gauge bosons to quarks and leptons, and the masses of the W and Z bosons themselves, are predicted by the theory in terms of the Weinberg angle θW, which must be determined by experiment. The Glashow-Weinberg-Salam theory successfully describes all existing data from a wide variety of electroweak processes, such as neutrino-nucleon, neutrino-electron, and electron-nucleon scattering. A major success of the model was the direct observation in 1983-84 of the W± and Z0 bosons, with the predicted masses of 80 and 91 GeV/c², in high-energy proton-antiproton interactions. The decay modes of the W± and Z0 bosons have been studied in very high energy proton-antiproton and e+e‒ interactions and found to be in good agreement with the standard model. The six known types (or flavours) of quarks and the six known leptons are grouped into three separate generations of particles as follows:

1st generation: e‒ ve u d

2nd generation: μ‒ vμ c s

3rd generation: τ‒ vτ t b

The second and third generations are essentially copies of the first generation, which contains the electron and the ‘up’ and ‘down’ quarks making up the proton and neutron, but involve particles of higher mass. Communication between the different generations occurs only in the quark sector and only for interactions involving W± bosons. Studies of Z0 boson production in very high energy electron-positron interactions have shown that no further generations of quarks and leptons can exist in nature (an arbitrary number of generations is a priori possible within the standard model), provided only that any new neutrinos are approximately massless.

The Glashow-Weinberg-Salam model also predicts the existence of a heavy spin-0 particle, not yet observed experimentally, known as the Higgs boson. A spontaneous symmetry-breaking mechanism is used to generate non-zero masses for the W± and Z bosons in the electroweak theory. The mechanism postulates the existence of two new complex fields, φ(χμ) = φ1 + iφ2 and Ψ(χμ) = Ψ1 + iΨ2, which are functions of the space-time points χμ = (χ, y, z, t) and form a doublet (φ, Ψ). This doublet of complex fields transforms in the same way as leptons and quarks under electroweak gauge transformations. Such gauge transformations rotate φ1, φ2, Ψ1, Ψ2 into each other without changing the nature of the physics.

The vacuum does not share the symmetry of the fields (φ, Ψ), and a spontaneous breaking of the vacuum symmetry occurs via the Higgs mechanism. Consequently, the fields φ and Ψ have non-zero values in the vacuum. A particular orientation of φ1, φ2, Ψ1, Ψ2 may be chosen so that all the components vanish in the vacuum except φ1. This component responds to electroweak fields in a way that is analogous to the response of a plasma to electromagnetic fields. Plasmas oscillate in the presence of electromagnetic waves; however, electromagnetic waves can only propagate at a frequency above the plasma frequency ωp, given by the expression:

ωp² = ne²/mε

Where n is the charge number density, e the electron charge, m the electron mass, and ε the permittivity of the plasma. In quantum field theory, this minimum frequency for electromagnetic waves may be thought of as a minimum energy for the existence of a quantum of the electromagnetic field (a photon) within the plasma. This minimum energy amounts to a mass for the photon, which becomes the field quantum of a finite-range force. Thus, in a plasma, photons acquire a mass and the electromagnetic interaction has a finite range.
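
The plasma frequency, and with it the minimum photon energy in the plasma, can be evaluated directly. The electron density below is an assumed example value; the constants are standard:

    import math

    # Plasma frequency omega_p = sqrt(n e^2 / (m epsilon)) and minimum photon energy.
    n = 1.0e18                 # electron number density, m^-3 (assumed example value)
    e = 1.602176634e-19        # electron charge, C
    m = 9.1093837015e-31       # electron mass, kg
    eps0 = 8.8541878128e-12    # permittivity of free space, F/m (vacuum value for simplicity)
    w_p = math.sqrt(n * e**2 / (m * eps0))
    print(w_p)                      # ~5.6e10 rad/s
    print(1.054571817e-34 * w_p)    # hbar * omega_p: minimum photon energy, J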

The vacuum field φ1 responds to weak fields by giving a mass and finite range to the W± and Z bosons; the electromagnetic field, however, is unaffected by the presence of φ1, so the photon remains massless. The mass acquired by the weak interaction bosons is proportional to the vacuum value of φ1 and to the weak charge strength. A quantum of the field φ1 is an electrically neutral particle called the Higgs boson. It interacts with all massive particles with a coupling that is proportional to their mass. The standard model does not predict the mass of the Higgs boson, but it is known that it cannot be too heavy (not much more than about 1000 proton masses), since this would lead to complicated self-interactions; such self-interactions are not believed to be present, because the theory does not account for them and yet successfully predicts the masses of the W± and Z bosons. The mass of the Higgs boson results from the same spontaneous symmetry-breaking mechanism that generates the non-zero masses of the W± and Z0 bosons; the particle is presumably too massive to have been produced in existing particle accelerators.

We now turn to the third binding force, the strong interaction between elementary particles. This force is about one hundred times greater than the electromagnetic force between charged elementary particles. However, it is a short-range force, important only for particles separated by a distance of less than about 10⁻¹⁵ m, and is the force that holds protons and neutrons together in atomic nuclei. For ‘soft’ interactions between hadrons, where small-scale transfers of momentum are involved, the strong interactions may be described in terms of the exchange of virtual hadrons, just as electromagnetic interactions between charged particles may be described in terms of the exchange of virtual photons. At a more fundamental level, the strong interaction arises as the result of the exchange of gluons between quarks and/or antiquarks, as described by quantum chromodynamics.

In the hadron exchange picture, any hadron can act as the exchanged particle provided certain quantum numbers are conserved. These quantum numbers are the total angular momentum, charge, baryon number, Isospin (both I and I3), strangeness, parity, charge conjugation parity, and G-parity. Strong interactions are investigated experimentally by observing how beams of high-energy hadrons are scattered when they collide with other hadrons. Two hadrons colliding at high energy will only remain near each other for a very short time. However, during the collision they may come sufficiently close to each other for a strong interaction to occur by the exchange of a virtual particle. As a result of this interaction, the two colliding particles will be deflected (scattered) from their original paths. If the virtual hadron exchanged during the interaction carries some quantum numbers from one particle to the other, the particles found after the collision may differ from those before it. Sometimes the number of particles is increased in a collision.

In hadron-hadron interactions, the number of hadrons produced increases approximately logarithmically with the total centre-of-mass energy, reaching about 50 particles for proton-antiproton collisions at 900 GeV, for example. In some of these collisions, two oppositely directed, collimated ‘jets’ of hadrons are produced, which are interpreted as due to an underlying interaction involving the exchange of an energetic gluon between, for example, a quark from the proton and an antiquark from the antiproton. The scattered quark and antiquark cannot exist as free particles; instead they ‘fragment’ into a large number of hadrons (mostly pions and kaons) travelling approximately along the original quark or antiquark direction. This results in collimated jets of hadrons that can be detected experimentally. Studies of this and other similar processes are in good agreement with the predictions of quantum chromodynamics.

An elementary particle is a particle that, as far as is known, is not composed of other, simpler particles. Elementary particles represent the most basic constituents of matter and are also the carriers of the fundamental forces between particles, namely the electromagnetic, weak, strong, and gravitational forces. The known elementary particles can be grouped into three classes: leptons, quarks, and gauge bosons. Hadrons, strongly interacting particles such as the proton and neutron, which are bound states of quarks and antiquarks, are also sometimes called elementary particles.

Leptons undergo electromagnetic and weak interactions, but not strong interactions. Six leptons are known: the negatively charged electron, muon, and tauon, plus their three associated neutrinos: ve, vμ, and vτ. The electron is a stable particle, but the muon and tau leptons decay through the weak interactions, with lifetimes of about 10⁻⁶ and 10⁻¹³ seconds respectively. Neutrinos are stable neutral leptons, which interact only through the weak interaction.

Corresponding to the leptons are six quarks, namely the up (u), charm (c), and top (t) quarks, with electric charge equal to +⅔ that of the proton, and the down (d), strange (s), and bottom (b) quarks, of charge ‒⅓ the proton charge. Quarks have not been observed experimentally as free particles, but reveal their existence only indirectly in high-energy scattering experiments and through patterns observed in the properties of hadrons. They are believed to be permanently confined within hadrons, either in baryons, half-integer-spin hadrons containing three quarks, or in mesons, integer-spin hadrons containing a quark and an antiquark. The proton, for example, is a baryon containing two up quarks and a down (d) quark, while the π+ is a positively charged meson containing an up quark and an anti-down antiquark. The only hadron that is stable as a free particle is the proton. The neutron is unstable when free. Within a nucleus, protons and neutrons are generally both stable, but either particle may transform into the other by ‘beta decay’ or ‘capture’.

Interactions between quarks and leptons are mediated by the exchange of particles known as ‘gauge bosons’, specifically the photon for electromagnetic interactions, W± and Z0 bosons for the weak interaction, and eight massless gluons in the case of the strong interaction.

A class of eigenvalue problems in physics that take the form ΩΨ = λΨ,

Where Ω is some mathematical operation (multiplication by a number, differentiation, etc.) on a function Ψ, which is called the ‘eigenfunction’; λ is called the ‘eigenvalue’, which in a physical system will be identified with an observable quantity. The function Ψ is analogous to the amplitude of a wave appearing in the equations of wave mechanics, particularly the Schrödinger wave equation; the most generally accepted interpretation is that |Ψ|²dV represents the probability that a particle is located within the volume element dV. A particle of mass m moving with a velocity v will, under suitable experimental conditions, exhibit the characteristics of a wave of wavelength λ given by the equation λ = h/mv, where h is the Planck constant (6.626 076 x 10⁻³⁴ J s). This equation is the basis of wave mechanics.
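
The de Broglie relation is easily evaluated; the sketch below (not from the source) takes an electron at an assumed non-relativistic speed and shows that its wavelength is comparable to atomic spacings:

    # de Broglie wavelength: lambda = h / (m v).
    h = 6.62607015e-34    # Planck constant, J s
    m = 9.1093837015e-31  # electron mass, kg
    v = 3.0e6             # assumed electron speed, m/s (about 1% of light speed)
    print(h / (m * v))    # ~2.4e-10 m, comparable to atomic dimensions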

Eigenvalue problems are ubiquitous in classical physics and occur whenever the mathematical description of a physical system yields a series of coupled differential equations. For example, the collective motion of a large number of interacting oscillators may be described by a set of coupled differential equations. Each differential equation describes the motion of one of the oscillators in terms of the positions of all the others. A ‘harmonic’ solution may be sought, in which each displacement is assumed to have a ‘simple harmonic motion’ in time. The differential equations then reduce to 3N linear equations with 3N unknowns, where N is the number of individual oscillators, each with three degrees of freedom. The whole problem is now easily recast as a ‘matrix equation’ of the form:

Mχ = ω2χ

where ‘M’ is a 3N × 3N matrix called the ‘dynamical matrix’, χ is a 3N × 1 column matrix, and ω² is the square of an angular frequency of the harmonic solution. The problem is now an eigenvalue problem whose eigenfunctions ‘χ’ are the normal modes of the system, with corresponding eigenvalues ω². As ‘χ’ can be expressed as a column vector, χ is a vector in some 3N-dimensional vector space. For this reason, χ is often called an eigenvector.
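As a minimal concrete sketch of such a matrix eigenvalue problem (the two-mass, three-spring chain and the unit values of m and k are assumptions for illustration, not taken from the text):

```python
# Sketch of the eigenvalue problem M x = omega^2 x for two equal masses m
# joined to two walls and to each other by three springs of stiffness k.
import numpy as np

m, k = 1.0, 1.0                          # assumed illustrative values
M = (k / m) * np.array([[2.0, -1.0],
                        [-1.0, 2.0]])    # dynamical matrix of the chain

eigenvalues, eigenvectors = np.linalg.eigh(M)  # M is symmetric, so eigh applies
omegas = np.sqrt(eigenvalues)                  # normal-mode angular frequencies
print(omegas)        # [1.0, 1.732...]: in-phase and out-of-phase modes
print(eigenvectors)  # columns are the corresponding eigenvectors (normal modes)
```

The in-phase mode, in which both masses move together and leave the middle spring unstretched, has the lower frequency.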

When the collection of oscillators is a complicated three-dimensional molecule, casting the problem into normal modes is an effective simplification of the system. The symmetry principles of ‘group theory’ can then be applied to classify normal modes according to their ‘ω’ eigenvalues (frequencies). This kind of analysis requires an appreciation of the symmetry properties of the molecule. The set of operations (rotations, inversions, etc.) that leave the molecule invariant makes up the ‘point group’ of that molecule. Normal modes sharing the same ‘ω’ eigenvalues are said to correspond to the ‘irreducible representations’ of the molecule’s point group. It is among these irreducible representations that one finds the infrared absorption spectrum of the vibrational normal modes of the molecule.

Eigenvalue problems play a particularly important role in quantum mechanics. In quantum mechanics, physically observable quantities (location, momentum, energy, etc.) are represented by operators (differentiation with respect to a variable, multiplication by a variable), which act on wave functions. Wave functions differ from classical waves in that they carry no energy. For a classical wave, the square modulus of its amplitude measures its energy. For a wave function, the square modulus of its amplitude (at a location x) represents not energy but probability, i.e., the probability that a particle (a localized packet of energy) will be observed if a detector is placed at that location. The wave function therefore describes the distribution of possible locations of the particle and is discernible only after many location-detection events have occurred. A measurement of position on a quantum particle may be written symbolically as:

x̂ Ψ(x) = x Ψ(x)

where Ψ(x) is said to be an eigenvector of the location operator and ‘x’ is the eigenvalue, which represents the location. Each Ψ(x) represents the amplitude at the location x, and |Ψ(x)|² is the probability that the particle will be located in an infinitesimal volume at that location. The wave function describing the distribution of all possible locations for the particle is the linear superposition of all Ψ(x), for 0 ≤ x ≤ ∞, that occur. The principle of superposition states, for stresses, that each stress is accompanied by the same strains whether it acts alone or in conjunction with others; this holds so long as the total stress does not exceed the limit of proportionality. In vibrations and wave motion, the principle asserts that one set of vibrations or waves is unaffected by the presence of another set. For example, two sets of ripples on water will pass through one another without mutual interaction so that, at a particular instant, the resultant disturbance at any point traversed by both sets of waves is the sum of the two component disturbances.

The eigenvalue problem in quantum mechanics therefore represents the act of measurement. The eigenvectors of an observable represent the possible states (position, in the case of x̂) that the quantum system can have. Attributes of a quantum system, such as position and momentum, are related by the Heisenberg uncertainty principle, which states that the product of the uncertainty in the measured value of a component of momentum (pₓ) and the uncertainty in the corresponding coordinate of position (x) is of the same order of magnitude as the Planck constant. Attributes related in this way are called ‘conjugate’ attributes. Thus, while an accurate measurement of position is possible, as a result of the uncertainty principle it produces a large momentum spread. Subsequent measurements of the position acquire a spread themselves, which makes continuous monitoring of the position impossible.

The eigenvalues are the values that observables take on within these quantum states. As in classical mechanics, eigenvalue problems in quantum mechanics may take differential or matrix forms, and the two forms have been shown to be equivalent. The differential form of quantum mechanics is called ‘wave mechanics’ (Schrödinger), where the operators are differential operators or multiplications by variables. Eigenfunctions in wave mechanics are wave functions corresponding to stationary wave states that satisfy some set of boundary conditions. The matrix form of quantum mechanics is often called matrix mechanics (Born and Heisenberg), in which the operators are represented by matrices acting on eigenvectors.

The relationship between matrix and wave mechanics is very similar to the relationship between matrix and differential forms of eigenvalue problems in classical mechanics. The wave functions representing stationary states are really normal modes of the quantum wave. These normal modes may be thought of as vectors that span a vector space, which have a matrix representation.

Once again, the Heisenberg uncertainty relation, or indeterminacy principle, of quantum mechanics associates the physical properties of particles into pairs such that both together cannot be measured to within more than a certain degree of accuracy. If ‘A’ and ‘V’ form such a pair, called a conjugate pair, then ΔAΔV > k, where ‘k’ is a constant and ΔA and ΔV are the variances in the experimental values for the attributes ‘A’ and ‘V’. The best-known instance of the equation relates the position and momentum of an electron: ΔpΔx > h, where ‘h’ is the Planck constant. This is the Heisenberg uncertainty principle. The usual value given for Planck’s constant is 6.6 × 10⁻²⁷ erg s. Since Planck’s constant is not zero, mathematical analysis reveals the following: the ‘spread’, or uncertainty, in position times the ‘spread’, or uncertainty, in momentum is greater than, or possibly equal to, the value of the constant or, more accurately, Planck’s constant divided by 2π. If we choose to know momentum exactly, then we know nothing about position, and vice versa.
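A small numerical sketch of this trade-off, taking the sharper textbook bound ΔxΔp ≥ ħ/2 and an electron confined to about one ångström (both the form of the bound and the example values are standard assumptions, not given in the text):

```python
# Minimum momentum and velocity spreads forced on an electron whose position
# is known to within about one angstrom (assumed illustrative numbers).

HBAR = 1.054571817e-34  # reduced Planck constant h / (2 pi), J s
M_E = 9.109e-31         # electron mass, kg

dx = 1e-10                 # position uncertainty, m (about one atom width)
dp_min = HBAR / (2 * dx)   # smallest momentum spread the relation allows
dv_min = dp_min / M_E      # corresponding velocity spread

print(dp_min)  # ~5.3e-25 kg m/s
print(dv_min)  # ~5.8e5 m/s: pinning down position forces a huge velocity spread
```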

The presence of Planck’s constant means that we confront in quantum physics a situation in which the mathematical theory does not allow precise prediction of, or exact correspondence with, physical reality. If nature did not insist on making changes or transitions in precise chunks of Planck’s quantum of action, or in multiples of these chunks, there would be no crisis. Yet whether this indeterminacy is seen as a cancerous growth in the body of an otherwise perfect knowledge of the physical world or as grounds for believing, in principle at least, in human freedom, one thing appears certain: it is an indelible feature of our understanding of nature.

In order to explain further how fundamental the quantum of action is to our present understanding of nature, let us attempt to do what quantum physics says we cannot do and visualize its role in the simplest of all atoms, the hydrogen atom. Imagine standing at the centre of the SkyDome, roughly where the pitcher’s mound is. Place a grain of salt on the mound, and picture a speck of dust moving furiously around the outermost reaches of the dome, with the grain of salt as its centre. This represents, roughly, the relative size of the nucleus and the distance between electron and nucleus inside the hydrogen atom when imagined in its particle aspect.

In quantum physics, however, the hydrogen atom cannot be visualized with such macro-level analogies. The orbit of the electron is not a circle in which a planetlike object moves; each orbit is described in terms of a probability distribution for finding the electron in an average position corresponding to that orbit, as opposed to an actual position. Without observation or measurement, the electron could in some sense be anywhere or everywhere within the probability distribution. Moreover, the space between probability distributions is not empty; it is infused with energetic vibrations capable of manifesting themselves as quanta.

The energy levels manifest at certain distances because transitions between orbits occur in precise units of Planck’s constant. If we attempt to observe or measure where the particle-like aspect of the electron is, the existence of Planck’s constant will always prevent us from knowing precisely all the properties of that electron that we might presume to be there in the absence of measurement. As in the two-slit experiment, our presence as observers and what we choose to measure or observe are inextricably linked to the results obtained. Since all complex molecules are built from simpler atoms, what has been said of the hydrogen atom applies generally to all material substances.

The grounds for objecting to quantum theory, the lack of a one-to-one correspondence between every element of the physical theory and the physical reality it describes, may seem justifiable and reasonable in strict scientific terms. After all, the completeness of all previous physical theories was measured against that criterion with enormous success. Since it was this success that gave physicists the reputation of being able to disclose physical reality with magnificent exactitude, perhaps a more complex quantum theory will emerge by continuing to insist on this requirement.

All indications are, however, that no future theory can circumvent quantum indeterminacy, and the success of quantum theory in co-ordinating our experience with nature is eloquent testimony to this conclusion. As Bohr realized, the fact that we live in a quantum universe in which the quantum of action is a given or an unavoidable reality requires a very different criterion for determining the completeness of physical theory. The new measure for a complete physical theory is that it unambiguously confirms our ability to co-ordinate more experience with physical reality.

If a theory does so and continues to do so, as is distinctively the case with quantum physics, then the theory must be deemed complete. Quantum physics not only works exceedingly well, it is, in these terms, the most accurate physical theory that has ever existed. When we consider that this physics allows us to predict and measure quantities like the magnetic moment of electrons to the fifteenth decimal place, we realize that accuracy per se is not the real issue. The real issue, as Bohr rightly intuited, is that this complete physical theory effectively undermines the privileged relationship in classical physics between physical theory and physical reality. Another measure of success in physical theory, elegance and simplicity, is also met by quantum physics. The quantum recipe for computing probabilities given by the wave function is straightforward and can be successfully employed by any undergraduate physics student: take the square of the wave amplitude to compute the probability that a certain value will be measured or observed. Yet there is a profound difference between the recipe for calculating quantum probabilities and the recipe for calculating probabilities in classical physics.

In quantum physics, one calculates the probability of an event that can happen in alternative ways by adding the wave functions and then taking the square of the amplitude. In the two-slit experiment, for example, the electron is described by one wave function if it goes through one slit and by another wave function if it goes through the other slit. In order to compute the probability of where the electron is going to end up on the screen, we add the two wave functions, compute the absolute value of their sum, and square it. Although the recipe in classical probability theory seems similar, it is quite different. In classical physics, one would simply add the probabilities of the two alternative ways and let it go at that. That classical procedure does not work here, because we are not dealing with classical objects: in quantum physics additional terms arise when the wave functions are added, and the probability is computed, in a process known as the ‘superposition principle’. The superposition principle can be illustrated with an analogy from simple mathematics: add two numbers and then take the square of their sum, as opposed to just adding the squares of the two numbers. Obviously, (2 + 3)² is not equal to 2² + 3²; the former is 25, and the latter is 13. In the language of quantum probability theory:


|Ψ1 + Ψ2|² ≠ |Ψ1|² + |Ψ2|²

where Ψ1 and Ψ2 are the individual wave functions. On the left-hand side, the superposition principle results in extra terms that cannot be found on the right-hand side; the left-hand side of the above relation is the way a quantum physicist would compute probabilities, and the right-hand side is the classical analogue. In quantum theory, the right-hand side is realized when we know, for example, which slit the electron went through. Heisenberg was among the first to compute what would happen in an instance like this. The extra superposition terms contained in the left-hand side of the above relation would not be there, and the peculiar wave-like interference pattern would disappear. The observed pattern on the final screen would therefore be what one would expect if electrons were behaving like bullets, and the final probability would be the sum of the individual probabilities. In other words, when we know which slit the electron went through, this interaction with the system causes the interference pattern to disappear.
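The cross term can be seen directly with made-up amplitudes (the values 0.6 and 0.8 and the phases below are assumptions chosen purely for illustration):

```python
# |psi1 + psi2|^2 versus |psi1|^2 + |psi2|^2 for two made-up amplitudes.
import numpy as np

psi1, a2 = 0.6, 0.8
for theta in (0.0, np.pi / 2, np.pi):    # relative phase between the two paths
    psi2 = a2 * np.exp(1j * theta)
    quantum = abs(psi1 + psi2) ** 2              # add amplitudes, then square
    classical = abs(psi1) ** 2 + abs(psi2) ** 2  # add probabilities directly
    print(round(theta, 2), round(quantum, 2), classical)
# 0.0  -> 1.96 vs 1.0 (constructive interference)
# 1.57 -> 1.00 vs 1.0 (the cross term happens to vanish)
# 3.14 -> 0.04 vs 1.0 (destructive interference)
```

The classical sum never changes; only the quantum rule produces the fringes.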

In order to give a full account of quantum recipes for computing probabilities, one has to examine what happens in events that are compounded. Compound events are events that can be broken down into a series of steps, or events that consist of a number of things happening independently. The recipe here calls for multiplying the individual wave functions and then following the usual quantum recipe of taking the square of the amplitude.

The quantum recipe is |Ψ1 · Ψ2|², and in this case the result would be the same if we multiplied the individual probabilities, as one would in classical theory. Thus the recipes for computing results in quantum theory and classical physics can be totally different: quantum superposition effects are completely non-classical, and there is no mathematical justification for why the quantum recipes work. What justifies the use of quantum probability theory is the same thing that justifies the use of quantum physics: it has allowed us, in countless experiments, vastly to extend our ability to co-ordinate experience with nature.
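This can be checked with the same kind of made-up amplitudes as before (again assumed purely for illustration):

```python
# For independent steps, |psi1 * psi2|^2 equals the product of probabilities.

psi1 = 0.6 + 0.0j
psi2 = 0.0 + 0.8j

quantum = abs(psi1 * psi2) ** 2              # multiply amplitudes, then square
classical = abs(psi1) ** 2 * abs(psi2) ** 2  # multiply the probabilities
print(quantum, classical)   # 0.2304... 0.2304... -- identical: no cross term arises
```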

The view of probability in the nineteenth century was greatly conditioned and reinforced by classical assumptions about the relationship between physical theory and physical reality. In that century, physicists developed sophisticated statistics to deal with large ensembles of particles before the actual character of those particles was understood. Classical statistics, developed primarily by James C. Maxwell and Ludwig Boltzmann, was used to account for the behaviour of molecules in a gas and to predict the average speed of a gas molecule in terms of the temperature of the gas.
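As a sketch of the kind of prediction meant here, the Maxwell-Boltzmann mean speed v̄ = √(8kT/πm) can be evaluated for nitrogen at room temperature (the gas, the temperature, and the formula itself are standard kinetic-theory assumptions, not quoted in the text):

```python
# Mean speed of a gas molecule from classical kinetic theory.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_N2 = 4.65e-26      # mass of one N2 molecule, kg (assumed example)

def mean_speed(temperature_k, mass_kg):
    """Maxwell-Boltzmann mean speed, sqrt(8 k T / (pi m))."""
    return math.sqrt(8 * K_B * temperature_k / (math.pi * mass_kg))

print(mean_speed(300.0, M_N2))   # ~475 m/s at room temperature
```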

The presumption was that the statistical averages were workable approximations that subsequent physical theories, or better experimental techniques, would refine with precision and certainty. Since nothing was known about quantum systems, and since quantum indeterminacy is small when dealing with macro-level effects, this presumption was quite reasonable. We know, however, that quantum mechanical effects are present in the behaviour of gases and that the choice to ignore them is merely a matter of convenience in getting workable or practical results. It is therefore no longer possible to assume that the statistical averages are merely higher-level approximations for a more exact description.

Perhaps the best-known defence of the classical conception of the relationship between physical theory and physical reality is the celebrated animal introduced by the Austrian physicist Erwin Schrödinger (1887-1961) in 1935, in a ‘thought experiment’ showing the strange nature of the world of quantum mechanics. The cat is thought of as locked in a box with a capsule of cyanide, which will break if a Geiger counter triggers. This will happen if an atom in a radioactive substance in the box decays, and there is a 50% chance of such an event within an hour; otherwise, the cat is alive. The problem is that the system is in an indeterminate state. The wave function of the entire system is a ‘superposition’ of states, fully described by the probabilities of events occurring when it is eventually measured, and therefore ‘contains equal parts of the living and dead cat’. When we look and see, we will find either a breathing cat or a dead cat; but if it is only as we look that the wave packet collapses, quantum mechanics forces us to say that before we looked it was not true that the cat was dead and not true that it was alive. The thought experiment makes vivid the difficulty of conceiving of quantum indeterminacies when these are translated to the familiar world of everyday objects.

The ‘electron’ is a stable elementary particle having a negative charge, e, equal to 1.602 189 25 × 10⁻¹⁹ C, and a rest mass, m₀, equal to 9.109 389 7 × 10⁻³¹ kg, equivalent to 0.511 0034 MeV/c². It has a spin of ½ and obeys Fermi-Dirac statistics. As it does not have strong interactions, it is classified as a ‘lepton’.

The discovery of the electron was reported in 1897 by Sir J. J. Thomson, following his work on the rays from the cold cathode of a gas-discharge tube. It was soon established that particles with the same charge and mass were obtained from numerous substances by the ‘photoelectric effect’, ‘thermionic emission’, and ‘beta decay’. Thus, the electron was found to be part of all atoms, molecules, and crystals.

Free electrons are studied in a vacuum or a gas at low pressure, where beams are emitted from hot filaments or cold cathodes and subjected to ‘focussing’, so that the particles travel in a narrow beam, as in, for example, a cathode-ray tube. The principal methods are: (i) electrostatic focussing, in which the beam is made to converge by the action of electrostatic fields between two or more electrodes at different potentials. The electrodes are commonly cylinders coaxial with the electron tube, and the whole assembly forms an electrostatic electron lens. The focussing effect is usually controlled by varying the potential of one of the electrodes, called the focussing electrode. (ii) Electromagnetic focussing, in which the beam is made to converge by the action of a magnetic field produced by the passage of direct current through a focussing coil. The latter is commonly a coil of short axial length mounted so as to surround the electron tube and to be coaxial with it.

The force FE on an electron in an electric field of strength E is given by FE = Ee and, because of the electron’s negative charge, acts in the direction opposite to the field. On moving through a potential difference V, the electron acquires a kinetic energy eV; hence it is possible to obtain beams of electrons of accurately known kinetic energy. In a magnetic field of magnetic flux density ‘B’, an electron with speed ‘v’ is subject to a force FB = Bev sin θ, where θ is the angle between ‘B’ and ‘v’. This force acts at right angles to the plane containing ‘B’ and ‘v’.
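Plugging in assumed illustrative field values (none of these numbers appear in the text) shows the scale of the two forces:

```python
# The two force laws just stated: F_E = E e and F_B = B e v sin(theta).
import math

E_CHARGE = 1.602e-19   # elementary charge, C

E_field = 1.0e4        # assumed electric field strength, V/m
print(E_field * E_CHARGE)                  # F_E ~ 1.6e-15 N

B, v, theta = 0.01, 1.0e6, math.pi / 2     # assumed flux density, speed, angle
print(B * E_CHARGE * v * math.sin(theta))  # F_B ~ 1.6e-15 N, at right angles to B and v
```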

The mass of any particle increases with speed according to the theory of relativity. If an electron is accelerated from rest through 5 kV, its mass is 1% greater than it is at rest. Thus, account must be taken of relativity in calculations on electrons with quite moderate energies.
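The 1% figure is easy to verify, since the kinetic energy eV equals (γ − 1)m₀c², so γ = 1 + eV/(m₀c²) (a standard relation, sketched here using the electron rest energy of about 0.511 MeV quoted earlier in the text):

```python
# Relativistic mass factor for an electron accelerated through 5 kV.

REST_ENERGY_EV = 511_000.0   # electron rest energy, ~0.511 MeV

gamma = 1.0 + 5_000.0 / REST_ENERGY_EV
print(gamma)   # ~1.0098: the moving mass gamma * m0 is about 1% above the rest mass
```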

According to ‘wave mechanics’, a particle with momentum mv exhibits diffraction and interference phenomena, similar to a wave with wavelength λ = h/mv, where ‘h’ is the Planck constant. For electrons accelerated through a few hundred volts, this gives wavelengths somewhat less than typical interatomic spacings in crystals. Hence, a crystal can act as a diffraction grating for electron beams.
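A quick non-relativistic estimate (the 200 V figure and the comparison spacing are assumed examples) uses λ = h/√(2meV), since eV = ½mv² gives mv = √(2meV):

```python
# Electron wavelength after acceleration through V volts (non-relativistic).
import math

H = 6.626e-34          # Planck constant, J s
M_E = 9.109e-31        # electron mass, kg
E_CHARGE = 1.602e-19   # elementary charge, C

def electron_wavelength(volts):
    return H / math.sqrt(2 * M_E * E_CHARGE * volts)

print(electron_wavelength(200.0))  # ~8.7e-11 m, below typical ~2.5e-10 m lattice spacings
```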

Because electrons are associated with a wavelength λ given by λ = h/mv, where ‘h’ is the Planck constant and mv the momentum of the electron, a beam of electrons suffers diffraction in its passage through crystalline material, similar to that experienced by a beam of X-rays. The diffraction pattern depends on the spacing of the crystal planes, and the phenomenon can be employed to investigate the structure of surface and other films. A set of waves that represents the behaviour, under appropriate conditions, of a particle (e.g., its diffraction by a crystal lattice) is given by the ‘de Broglie equation’. Such waves are sometimes regarded as waves of probability, since the square of their amplitude at a given point represents the probability of finding the particle in unit volume at that point.

The first experiment to demonstrate ‘electron diffraction’, and hence the wavelike nature of particles, was performed by Davisson and Germer in 1927. A narrow pencil of electrons from a hot-filament cathode was projected in vacuo onto a nickel crystal. The experiment showed the existence of a definite diffracted beam at one particular angle, which depended on the velocity of the electrons. Assuming this to be the Bragg angle (the structure of a crystal can be determined from a set of interference patterns found at various angles from the different crystal faces), the wavelength of the electrons was calculated and found to be in agreement with the de Broglie equation.
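The agreement can be reproduced with the textbook numbers usually quoted for this experiment (54 eV electrons, a peak at 50°, and a nickel row spacing of about 2.15 Å; these specific values are assumed here, not given in the text):

```python
# Davisson-Germer consistency check: de Broglie wavelength versus the
# wavelength inferred from the grating equation n*lambda = d*sin(phi).
import math

H, M_E, E_CHARGE = 6.626e-34, 9.109e-31, 1.602e-19

lam_de_broglie = H / math.sqrt(2 * M_E * E_CHARGE * 54.0)  # 54 eV electrons
lam_diffraction = 2.15e-10 * math.sin(math.radians(50.0))  # first-order peak at 50 degrees

print(lam_de_broglie)    # ~1.67e-10 m
print(lam_diffraction)   # ~1.65e-10 m: agreement within a few per cent
```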

At kinetic energies less than a few electron-volts, electrons undergo elastic collisions with atoms and molecules; because of the large ratio of the masses and the conservation of momentum, only an extremely small transfer of kinetic energy occurs. Thus, the electrons are deflected but not slowed appreciably. At higher energies, collisions are inelastic: molecules may be dissociated, and atoms and molecules may be excited or ionized. The ionization potential is the least energy that causes an ionization:

A → A⁺ + e⁻

where the ion and the electron are far enough apart for their electrostatic interaction to be negligible and no extra kinetic energy is carried away. The electron removed is that in the outermost orbit, i.e., the least strongly bound electron. Removal of electrons from inner orbits, in which their binding energy is greater, is also possible. Excited particles and recombining ions emit electromagnetic radiation, mostly in the visible or ultraviolet.

For electron energies of the order of several keV upwards, X-rays are generated. Electrons of high kinetic energy travel considerable distances through matter, leaving a trail of positive ions and free electrons. The energy is mostly lost in small increments (about 30 eV), with only an occasional major interaction causing X-ray emission. The range increases at higher energies.

The positron is the antiparticle of the electron, i.e., an elementary particle with the electron’s mass and a positive charge equal in magnitude to that of the electron. According to the relativistic wave mechanics of Dirac, space contains a continuum of electrons in states of negative energy. These states are normally unobservable, but if sufficient energy is supplied, an electron may be raised into a state of positive energy and become observable. The vacant negative-energy state behaves as a positive particle of positive energy, which is observed as a positron.

The simultaneous formation of a positron and an electron from a photon is called ‘pair production’, and occurs when a gamma-ray photon with an energy of at least 1.02 MeV passes close to an atomic nucleus. In the reverse process, annihilation, the interaction between a particle and its antiparticle makes both disappear, and photons or other elementary particles or antiparticles are created, in accordance with energy and momentum conservation.
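The 1.02 MeV threshold is simply twice the electron rest energy, since the photon must supply the rest mass of both created particles (a sketch using standard constants, consistent with the rest energy quoted earlier):

```python
# Pair-production threshold: E = 2 m0 c^2.

M_E = 9.109e-31        # electron rest mass, kg
C = 2.998e8            # speed of light, m/s
J_PER_EV = 1.602e-19   # joules per electron-volt

threshold_mev = 2 * M_E * C ** 2 / J_PER_EV / 1e6
print(threshold_mev)   # ~1.02 MeV
```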

At low energies, an electron and a positron annihilate to produce electromagnetic radiation. Usually the particles have little kinetic energy or momentum in the laboratory system before interaction; hence the total energy of the radiation is nearly 2m₀c², where m₀ is the rest mass of an electron. In nearly all cases two photons are generated, each of 0.511 MeV, in almost exactly opposite directions to conserve momentum. Occasionally, three photons are emitted, all in the same plane. Electron-positron annihilation at high energies has been extensively studied in particle accelerators. Generally, the annihilation results in the production of a quark and an antiquark (e.g., e⁺e⁻ → q q̄) or of a charged lepton plus an antilepton (e⁺e⁻ → μ⁺μ⁻). The quarks and antiquarks do not appear as free particles but convert into several hadrons, which can be detected experimentally. As the energy available in the electron-positron interaction increases, quarks and leptons of progressively larger rest mass can be produced. In addition, striking resonances are present, which appear as large increases in the rate at which annihilations occur at particular energies. The J/ψ particle and similar resonances containing a charm quark and antiquark are produced at an energy of about 3 GeV, for example, giving rise to abundant production of charmed hadrons. Bottom (b) quark production occurs at energies greater than about 10 GeV. A resonance at an energy of about 90 GeV, due to the production of the Z⁰ gauge boson involved in the weak interaction, is under intensive study at the LEP and SLC e⁺e⁻ colliders. Particle accelerators are machines for increasing the kinetic energy of charged particles or ions, such as protons or electrons, by accelerating them in an electric field. A magnetic field is used to maintain the particles in the desired direction. The particles can travel in straight, spiral, or circular paths; at present, the highest energies are obtained in the proton synchrotron.

The Super Proton Synchrotron at CERN (Geneva) accelerates protons to 450 GeV. It can also cause proton-antiproton collisions with total kinetic energy, in centre-of-mass co-ordinates of 620 GeV. In the USA the Fermi National Acceleration Laboratory proton synchrotron gives protons and antiprotons of 800 GeV, permitting collisions with total kinetic energy of 1600 GeV. The Large Electron Positron (LEP) system at CERN accelerates particles to 60 GeV.

All the aforementioned devices are designed to produce collisions between particles travelling in opposite directions. This gives effectively much higher energies available for interaction than are possible with stationary targets. High-energy nuclear reactions occur when the particles collide, either with each other or with a stationary target. The particles created in these reactions are detected by sensitive equipment close to the collision site. New particles, including the tauon and the W and Z particles, requiring enormous energies for their creation, have been detected and their properties determined.

A ‘nucleon’ and an ‘antinucleon’ annihilating at low energy produce about half a dozen pions, which may be neutral or charged. By definition, mesons are both hadrons and bosons, just as the pion and kaon are mesons. Mesons have a substructure composed of a quark and an antiquark bound together by the exchange of particles known as gluons.

The conjugate particle, or antiparticle, corresponds to another particle of identical mass and spin but has quantum numbers such as charge (Q), baryon number (B), strangeness (S), charm (C), and isospin (I₃) of equal magnitude but opposite sign. Examples of a particle and its antiparticle include the electron and positron, the proton and antiproton, the positive and negative pions, and the ‘up’ quark and ‘up’ antiquark. The antiparticle corresponding to a particle with the symbol a is usually denoted ā. When a particle and its antiparticle are identical, as with the photon and neutral pion, it is called a ‘self-conjugate particle’.

The critical potential, or excitation energy, required to change an atom or molecule from one quantum state to another of higher energy is equal to the difference in energy of the states; it is usually the difference in energy between the ground state of the atom and a specified excited state, an excited state being the state of a system, such as an atom or molecule, when it has a higher energy than its ground state.
