May 31, 2010

Theories about elementary particles, however, require forces to be local; that is, the objects affecting each other must come into contact. Scientists achieved this locality by introducing the idea of elementary particles that carry the force from one object to another. Experiments have confirmed the existence of many of these particles. In the case of electromagnetism, a particle called a photon travels between two repelling electrons. One electron releases the photon and recoils, while the other electron absorbs it and is pushed away.


Each of the four forces has one or more unique force carriers, such as the photon, associated with it. These force carrier particles are bosons: they do not obey the exclusion principle, so any number of force carriers can have the same characteristics. They are also believed to be fundamental, meaning they cannot be split into smaller particles. Other than the fact that they are all fundamental bosons, the force carriers have very few common features. They are as unique as the forces they carry.

For centuries, electricity and magnetism seemed distinct forces. In the 1800s, however, experiments showed many connections between these two forces. In 1864 British physicist James Clerk Maxwell drew together the work of many physicists to show that electricity and magnetism are different aspects of the same electromagnetic force. This force causes particles with similar electric charges to repel one another and particles with opposite charges to attract one another. Maxwell also showed that light is a travelling form of electromagnetic energy. The founders of quantum mechanics took Maxwell’s work one step further. In 1925 German-British physicist Max Born, and German physicists Ernst Pascual Jordan and Werner Heisenberg showed mathematically that packets of light energy, later called photons, are emitted and absorbed when charged particles attract or repel each other through the electromagnetic force.

Any particle with electric charge, such as a quark or an electron, is subject to, or ‘feels,’ the electromagnetic force. Electrically neutral particles, such as neutrinos, do not feel it. The electric charge of a hadron is the sum of the charges on the quarks in the hadron. If the sum is zero, the electromagnetic force does not affect the hadron, although it does affect the quarks inside the hadron. Photons carry the electromagnetic force between particles but have no mass or electric charge themselves. Since photons have no electric charge, they are not affected by the force they carry.
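The rule that a hadron's electric charge is the sum of its quarks' charges can be illustrated with a short sketch (the helper names are hypothetical; the quark charges are the standard values, in units of the proton charge):

```python
from fractions import Fraction

# Standard electric charges of the six quark types, in units of e
QUARK_CHARGE = {
    "u": Fraction(2, 3),   # up
    "d": Fraction(-1, 3),  # down
    "s": Fraction(-1, 3),  # strange
    "c": Fraction(2, 3),   # charm
    "b": Fraction(-1, 3),  # bottom
    "t": Fraction(2, 3),   # top
}

def hadron_charge(quarks, antiquarks=()):
    """Sum the charges of the constituent quarks; each antiquark
    contributes the opposite of its quark's charge."""
    total = sum(QUARK_CHARGE[q] for q in quarks)
    total -= sum(QUARK_CHARGE[q] for q in antiquarks)
    return total

print(hadron_charge("uud"))     # proton:  1
print(hadron_charge("udd"))     # neutron: 0
print(hadron_charge("u", "d"))  # pi+ meson (up, anti-down): 1
```

The neutron's total charge is zero, so the electromagnetic force does not act on the neutron as a whole, even though it does act on each charged quark inside it.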

Unlike neutrinos and some other electrically neutral particles, the photon does not have a distinct antiparticle. Particles that have antiparticles are like positive and negative numbers-they are each the other’s additive inverse. Photons are like the number zero, which is its own additive inverse. In effect, a photon is its own antiparticle.

In one example of the electromagnetic force, two electrons repel each other because they both have negative electric charges. One electron releases a photon, and the other electron absorbs it. Even though photons have no mass, their energy gives them momentum, a property that enables them to affect other particles. The momentum of the photon pushes the two electrons apart, just as the momentum of a basketball tossed between two ice skaters will push the skaters apart.

Quarks and particles made of quarks attract each other through the strong force. The strong force holds the quarks in protons and neutrons together, and it holds protons and neutrons together in atomic nuclei. If electromagnetism were the only force between quarks, the two up quarks in a proton would repel each other because they are both positively charged. (The up quarks are also attracted to the negatively charged down quark in the proton, but this attraction is not as great as the repulsion between the up quarks.) However, the strong force is stronger than the electromagnetic force, so it glues the quarks inside the proton together.

A property of particles called colour charge determines how the strong force affects them. The term colour charge has nothing to do with colour in the usual sense; it is just a convenient way for scientists to describe this property of particles. Colour charge is similar to electric charge, which determines a particle’s electromagnetic interactions. Quarks can have a colour charge of red, blue, or green. Antiquarks can have a colour charge of anti-red (also called cyan), anti-blue (also called yellow), or anti-green (also called magenta). Quark types and colours are not linked: a quark of any type may be red, green, or blue.

All observed objects carry a colour charge of zero, so quarks (which compose matter) must combine to form hadrons that are colourless, or colour neutral. The colour charges of the quarks in hadrons therefore cancel one another. Mesons contain a quark of one colour and an antiquark of the quark’s anti-colour. The colour charges cancel each other out and make the meson white, or colourless. Baryons contain three quarks, each with a different colour. As with light, the colours red, blue, and green combine to produce white, so the baryon is white, or colourless.

The bosons that carry the strong force between particles are called gluons. Gluons have no mass or electric charge and, like photons, they are their own antiparticle. Unlike photons, however, gluons do have colour charge. They carry a colour and an anticolour. Possible gluon colour combinations include red-antiblue, green-antired, and blue-antigreen. Because gluons carry colour charge, they can attract each other, while the colourless, electrically neutral photons cannot. Colours and anticolours attract each other, so gluons that carry one colour will attract gluons that carry the associated anticolour.

Gluons carry the strong force by moving between quarks and antiquarks and changing the colours of these particles. Quarks and antiquarks in hadrons constantly exchange gluons, changing colours as they emit and absorb gluons. Baryons and mesons are all colourless, so each time a quark or antiquark changes colour, other quarks or antiquarks in the particle must change colour as well to preserve the balance. The constant exchange of gluons and colour charge inside mesons and baryons creates a colour force field that holds the particles together.

The strong force is the strongest of the four forces in atoms. Quarks are bound so tightly to each other that they cannot be isolated. Separating a quark from an antiquark requires more energy than creating a quark and antiquark does. Attempting to pull apart a meson, then, just creates another meson: The quark in the original meson combines with a newly created antiquark, and the antiquark in the original meson combines with a newly created quark.

In addition to holding quarks together in mesons and baryons, gluons and the strong force also attract mesons and baryons to one another. The nuclei of atoms contain two kinds of baryons: protons and neutrons. Protons and neutrons are colourless, so the strong force does not attract them to each other directly. Instead, the individual quarks in one neutron or proton attract the quarks of its neighbours. The pull of quarks toward each other, even though they occur in separate baryons, provides enough energy to create a quark-antiquark pair. This pair of particles forms a type of meson called a pion. The exchange of pions between neutrons and protons holds the baryons in the nucleus together. The strong force between baryons in the nucleus is called the residual strong force.

While the strong force holds the nucleus of an atom together, the weak force can make the nucleus decay, changing some of its particles into other particles. The weak force is so named because it is far weaker than the electromagnetic or strong forces. For example, an interaction involving the weak force is 10 quintillion (10 billion billion) times less likely to occur than an interaction involving the electromagnetic force. Three particles, called vector bosons, carry the weak force. The weak force equivalent to electric charge and colour charge is a property called weak hypercharge. Weak hypercharge determines whether the weak force will affect a particle. All fermions possess weak hypercharge, as do the vector bosons that carry the weak force.

All elementary particles, except the force carriers of the other forces and the Higgs boson, interact by means of the weak force. Yet the effects of the weak force are usually masked by the other, stronger forces. The weak force is not very significant when considering most of the interactions between two quarks. For example, the strong force completely overwhelms the weak force when a quark bounces off another quark. Nor does the weak force significantly affect interactions between two charged particles, such as the interaction between an electron and a proton. The electromagnetic force dominates those interactions.

The weak force becomes significant when an interaction does not involve the strong force or the electromagnetic force. For example, neutrinos have neither electric charge nor colour charge, so any interaction involving a neutrino must be due to either the weak force or the gravitational force. The gravitational force is even weaker than the weak force on the scale of elementary particles, so the weak force dominates in neutrino interactions.

One example of a weak interaction is beta decay involving the decay of a neutron. When a neutron decays, it turns into a proton and emits an electron and an electron antineutrino. The neutron and antineutrino are electrically neutral, ruling out the electromagnetic force as a cause. The antineutrino and electron are colourless, so the strong force is not at work. Beta decay is due solely to the weak force.

The weak force is carried by three vector bosons, designated the W+, the W-, and the Z0. The W bosons are electrically charged (+1 and –1), so they can feel the electromagnetic force. These two bosons are each other’s antiparticles, while the Z0 is its own antiparticle. All three vector bosons are colourless. A distinctive feature of the vector bosons is their mass: the weak force is the only force carried by particles that have mass. These massive force carriers cannot travel as far as massless force carriers, so the weak force acts over shorter distances than the other three forces.

When the weak force affects a particle, the particle emits one of the three weak vector bosons (W+, W-, or Z0) and changes into a different particle. The weak vector boson then decays to produce other particles. In interactions that involve the W+ and W-, a particle changes into a particle with a different electric charge. For example, in beta decay, one of the down quarks in a neutron changes into an up quark, and the neutron releases a W- boson. This change in quark type converts the neutron (two down quarks and an up quark) into a proton (one down quark and two up quarks). The W- boson released by the neutron can then decay into an electron and an electron antineutrino. In Z0 interactions, a particle changes into a particle with the same electric charge.
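Beta decay as described above can be sketched by tracking quark content and checking that electric charge is conserved at each step (hypothetical helper names; the charges are the standard values):

```python
from fractions import Fraction

# Electric charges in units of the proton charge e
CHARGE = {
    "u": Fraction(2, 3), "d": Fraction(-1, 3),  # up and down quarks
    "W-": Fraction(-1),                          # weak vector boson
    "e-": Fraction(-1), "anti-nu_e": Fraction(0),
}

def total_charge(particles):
    return sum(CHARGE[p] for p in particles)

# Beta decay: a down quark in the neutron (udd) becomes an up quark,
# emitting a W- boson and turning the neutron into a proton (uud).
neutron = ["u", "d", "d"]
proton = ["u", "u", "d"]
assert total_charge(neutron) == total_charge(proton + ["W-"])

# The W- boson then decays into an electron and an electron antineutrino.
assert total_charge(["W-"]) == total_charge(["e-", "anti-nu_e"])
print("electric charge is conserved at every step")
```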

A quark or lepton can change into a different quark or lepton from another generation only by the weak interaction. Thus the weak force is the reason that all stable matter contains only first generation leptons and quarks. The second and third generation leptons and quarks are heavier than their first generation counterparts, so they quickly decay into the lighter first generation leptons and quarks by exchanging W and Z bosons. The first generation particles have no lighter counterparts into which they can decay, so they are stable.

Physicists call their goal of an overall theory a ‘theory of everything,’ because it would explain all four known forces in the universe and how these forces affect particles. In such a theory, the particles that carry the gravitational force would be called gravitons. Gravitons should share many characteristics with photons because, like electromagnetism, gravitation is a long-range force that gets weaker with distance. Gravitons should be massless and have no electric charge or colour charge. The graviton is the only force carrier not yet observed in an experiment.

Gravitation is overall the weakest of the four forces, but it can become extremely powerful on a cosmic scale. For instance, the gravitational force between Earth and the Sun holds Earth in orbit. Gravity can have large effects because, unlike the electromagnetic force, it is always attractive. Every particle in your body has some tiny gravitational attraction to the ground. The innumerable tiny attractions add up, which is why you do not float off into space. The negative charge on electrons, however, cancels out the positive charge on the protons in your body, leaving you electrically neutral.

Another unique feature of gravitation is its universality: every object is gravitationally attracted to every other object, even objects without mass. For example, the theory of relativity predicted that light should feel the gravitational force. Before Einstein, scientists thought that gravitational attraction depended only on mass. They thought that light, being massless, would not be attracted by gravitation. Relativity, however, holds that gravitational attraction depends on the energy of an object and that mass is just one possible form of energy. Einstein was proven correct in 1919, when astronomers observed that the gravitational attraction between light from distant stars and the Sun bends the path of the light around the Sun (Gravitational Lens).

The standard model of particle physics includes an elementary boson that is not a force carrier: the Higgs boson. Scientists have not yet detected the Higgs boson in an experiment, but they believe it gives elementary particles their mass. Composite particles receive their mass from their constituent particles, and in some cases, the energy involved in holding these particles together. For example, the mass of a neutron comes from the mass of its quarks and the energy of the strong force holding the quarks together. The quarks themselves, however, have no such source of mass, which is why physicists introduced the idea of the Higgs boson. Elementary particles should obtain their mass by interacting with the Higgs boson.

Scientists expect the mass of the Higgs boson to be large compared to that of most other fundamental particles. Physicists can create more massive particles by forcing smaller particles to collide at high speeds. The energy released in the collisions converts to matter. Producing the Higgs boson, with its relatively large mass, will require a tremendous amount of energy. Many scientists are searching for the Higgs boson using machines called particle colliders. Particle colliders shoot a beam of particles at a target or another beam of particles to produce new, more massive particles.

Scientific progress often occurs when people find connections between apparently unconnected phenomena. For example, 19th-century British physicist James Clerk Maxwell made a connection between electric forces on charged objects and the force on a moving charge due to a magnet. He deduced that the electric force and the magnetic force were just different aspects of the same force. His discovery led to a deeper understanding of electromagnetism.

The unification of electricity and magnetism and the discovery of the strong and weak nuclear forces in the mid-20th century left physicists with four apparently independent forces: electromagnetism, the strong force, the weak force, and gravitation. Physicists believe they should be able to connect these forces with one unified theory, called a theory of everything (TOE). A TOE should explain all particles and particle interactions by demonstrating that these four forces are different aspects of one universal force. The theory should also explain why fermions come in three generations when all stable matter contains fermions from just the first generation.

Scientists also hope that in explaining the extra generations, a TOE will explain why particles have the masses they do. They would like an explanation of why the top quark is so much heavier than the other quarks and why neutrinos are so much lighter than the other fermions. The standard model does not address these questions, and scientists have had to determine the masses of particles by experiment rather than by theoretical calculations.

Unification of all of the forces, however, is not an easy task. Each force appears to have distinctive properties and unique force carriers. In addition, physicists have yet to describe successfully the gravitational force in terms of particles, as they have for the other three forces. Despite these daunting obstacles, particle physicists continue to seek a unified theory and have made some progress. Starting points for unification include the electroweak theory and grand unification theories.

American physicists Sheldon Glashow and Steven Weinberg and Pakistani physicist Abdus Salam completed the first step toward finding a universal force in the 1960s with their electroweak theory, now part of the standard model of particle physics. Using a branch of mathematics called group theory, they showed how the weak force and the electromagnetic force could be combined mathematically into a single electroweak force. The electromagnetic force seems much stronger than the weak force at low energies, but that disparity is due to the differences between the force carriers. At higher energies, the difference between the W and Z bosons of the weak force, which have mass, and the massless photons of the electromagnetic force becomes less significant, and the two forces become indistinguishable.

The standard model also uses group theory to describe the strong force, but scientists have not yet been able to unify the strong force with the electroweak force. The next step toward finding a TOE would be a grand unified theory (GUT), a theory that would unify the strong, electromagnetic, and weak forces (the forces currently described by the standard model). A GUT should describe all three forces as different aspects of one force. At high energies, the distinctions among the three aspects should disappear. The only force remaining would then be the gravitational force, which scientists have not been able to describe with particle theory.

One type of GUT contains a theory called supersymmetry (SUSY), first suggested in 1971. Supersymmetric theories set rules for new symmetries, or pairings, between particles and interactions. The standard model, for example, requires that every particle have an associated antiparticle. In a similar manner, SUSY requires that every particle have an associated supersymmetric partner. While a particle and its antiparticle are either both fermions or both bosons, the supersymmetric partner of a fermion should be a boson, and the supersymmetric partner of a boson should be a fermion. For example, the electron, a fermion, should be paired with a boson called a selectron, and the fermion quarks with bosons called squarks. The force-carrying bosons, such as photons and gluons, should be paired with fermions called photinos and gluinos. Scientists have yet to detect these supersymmetric partners, but they believe the partners may be massive compared with known particles and therefore require too much energy to create with current particle accelerators.

Another approach to grand unification involves string theories. British physicist Paul Dirac developed the first string theory in 1950. String theories describe elementary particles as loops of vibrating string. Scientists believe these strings are currently invisible to us because the vibrations do not occur in the four familiar dimensions of space and time; some string theories, for example, need as many as 26 dimensions to explain particles and particle interactions. Incorporating supersymmetry into string theory results in superstring theories. Superstring theories are among the leading candidates in the quest to unify gravitation with the other forces. The mathematics of superstring theories incorporates gravity into particle physics easily. Many scientists, however, do not believe superstrings are the answer, because no one has detected the additional dimensions required by string theory.

Studying elementary particles requires specialized equipment, the skill of deduction, and much patience. All of the fundamental particles-leptons, quarks, force-carrying bosons, and the Higgs boson-appear to be ‘point particles.’ A point particle is infinitely small: it exists at a single point in space without taking up any space. These fundamental particles are therefore impossible to see directly, even with the most powerful microscopes. Instead, scientists must deduce the properties of a particle from the way it affects other objects.

In a way, studying an elementary particle is like tracking a white polar bear in a field of snow: The polar bear may be impossible to see, but you can see the tracks it left in the snow, you can find trees it clawed, and you can find the remains of polar bear meals. You might even smell or hear the polar bear. From these observations, you could determine the position of the polar bear, its speed (from the spacing of the paw prints), and its weight (from the depth of the paw prints). No one can see an elementary particle, but scientists can look at the tracks it leaves in detectors, and they can look at materials with which it has interacted. They can even measure electric and magnetic fields caused by electrically charged particles. From these observations, physicists can deduce the position of an elementary particle, its speed, its weight, and many other properties.

Most particles are extremely unstable, which means they decay into other particles very quickly. Only the proton, neutron, electron, photon, and neutrinos can be detected a significantly long time after they are created. Studying the other particles, such as mesons, the heavier baryons, and the heavier leptons, requires detectors that can take many (250,000 or more) measurements per second. In addition, these heavier particles do not naturally exist on the surface of Earth, so scientists must create them in the laboratory or look to natural laboratories, such as stars and Earth’s atmosphere. Creating these particles requires extremely high amounts of energy.

Particle physicists use large, specialized facilities to measure the effects of elementary particles. In some cases, they use particle accelerators and particle colliders to create the particles to be studied. Particle accelerators are huge devices that use electric and magnetic fields to speed up elementary particles. Particle colliders are chambers in which beams of accelerated elementary particles crash into one another. Scientists can also study elementary particles from outer space, from sources such as the Sun. Physicists use large particle detectors, complex machines with several different instruments, to measure many different properties of elementary particles. Particle traps slow down and isolate particles, allowing direct study of the particles’ properties.

When energetic particles collide, the energy released in the collision can convert to matter and produce new particles. The more energy produced in the collision, the heavier the new particles can be. Particle accelerators produce heavier elementary particles by accelerating beams of electrons, protons, or their antiparticles to very high energies. Once the accelerated particles reach the desired energy, scientists steer them into a collision. The particles can collide with a stationary object (in a fixed target experiment) or with another beam of accelerated particles (in a collider experiment).
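The conversion between collision energy and new particle mass follows Einstein's relation E = mc². A minimal sketch of the arithmetic (hypothetical helper names; the constants are the standard rounded values):

```python
# E = m * c**2: energy equivalent of a particle's rest mass
C = 2.998e8            # speed of light, metres per second
J_PER_MEV = 1.602e-13  # joules per mega-electron-volt

def rest_energy_mev(mass_kg):
    """Minimum energy (in MeV) a collision must supply to create
    a particle of the given rest mass."""
    return mass_kg * C**2 / J_PER_MEV

electron_mass = 9.109e-31  # kg
print(round(rest_energy_mev(electron_mass), 3))  # about 0.511 MeV
```

This minimum is for the rest mass alone; in a fixed-target experiment much of the beam energy goes into the motion of the products, which is one reason collider experiments, where two beams meet head-on, reach heavier particles.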

Particle accelerators come in two basic types: linear accelerators and circular accelerators. Devices that accelerate particles in a straight line are called linear accelerators. They use electric fields to speed up charged particles. Traditional cathode-ray tube television sets and computer monitors (as opposed to flat screens) use this method to accelerate electrons.

On January 1, 2000, people around the world celebrated the arrival of a new millennium. Some observers noted that the Gregorian calendar, which most of the world uses, began in AD 1, and that the new millennium therefore truly began in 2001. This detail failed to stem millennial festivities, but the issue shed light on the arbitrary nature of the way human beings have measured time for . . . well . . . several millennia.

Few people know that the fellow responsible for the dating of the year 2000 was a diminutive Christian monk who lived nearly 15 centuries ago. The Romans called him Dionysius Exiguus-literally, Dennis the Little. His stature, however, could not contain his colossal aspiration: to reorder time itself. The tiny monk's efforts paid off. His work helped establish the basis for the Gregorian calendar used today throughout the world.

Dennis the Little lived in Rome during the 6th century, a generation after the last emperor was deposed. The eternal city had collapsed into ruins: Its walls had been breached, its aqueducts were shattered, and its streets were eerily silent. A trained mathematician, Dennis spent his days at a complex now called the Vatican, writing church canons and thinking about time.

In the year that historians now know as 525, Pope John I asked Dennis to calculate the dates upon which future Easters would fall. Then, as now, this was a complicated task, given the formula adopted by the church some two centuries earlier: that Easter will fall on the first Sunday after the first full Moon following the spring equinox. Dennis carefully studied the positions of the Moon and the Sun and produced a chart of upcoming Easters, beginning in 532. A calendar beginning in the year 532 probably struck Dennis's contemporaries as strange. For them the year was either 1285, dated from the founding of Rome, or 248, based on a calendar that started with the first year of the reign of Emperor Diocletian.
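Easter's date can still be computed with a short algorithm. Here is a sketch of the widely published 'anonymous Gregorian computus' (often attributed to Meeus, Jones, and Butcher); note it is valid for the modern Gregorian calendar, not the Julian tables Dennis actually used:

```python
def easter(year):
    """Return (month, day) of Western Easter in the Gregorian calendar,
    using the anonymous Gregorian computus."""
    a = year % 19                         # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # related to the age of the Moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2000))  # (4, 23): April 23, 2000
print(easter(2010))  # (4, 4):  April 4, 2010
```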

Dennis approved of neither accepted date, especially not the one glorifying the reign of Diocletian, a notorious persecutor of Christians. Instead, Dennis calculated his years from the reputed birth date of Jesus Christ. Justifying his choice, Dennis wrote that he ‘preferred to count and denote the years from the incarnation of our Lord, in order to make the foundation of our hope better known. . . .’ Dennis's preference appeared on his new Easter charts, which began with anno Domini nostri Jesu Christi DXXXII (Latin for ‘in the year of our Lord Jesus Christ 532’), or AD 532.

However, Dennis got his dates wrong. Modern biblical historians believe Jesus Christ was most likely born in 4 or 5 BC, not in the year Dennis called AD 1, although no one knows for sure. The real 2,000-year anniversary of Jesus' birth was therefore probably 1996 or 1997. Dennis pegged the birth of Christ to the year AD 1, rather than AD 0, for the simple reason that Roman numerals had no zero. The mathematical concept of zero did not reach Europe until some eight centuries later. So the wee abbot started with year 1, and 2,000 years from the start of year 1 is not January 1, 2000, but January 1, 2001, a date many people find far less interesting.
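The no-year-zero arithmetic is easy to check: because the era begins at the start of year 1, N complete years have elapsed only at the start of year N + 1. A trivial sketch (hypothetical function name):

```python
def year_after_elapsed(elapsed_years):
    """Calendar year that begins once `elapsed_years` full years have
    passed since the start of AD 1 (there is no year 0)."""
    return 1 + elapsed_years

# Two full millennia after the start of AD 1 begin in 2001, not 2000.
print(year_after_elapsed(2000))  # 2001
```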

These errors, however, are hardly unique in the complicated history of the Gregorian calendar, which is essentially a story of attempts, and failures, to get time right. It was not until 1949, when Communist leader Mao Zedong seized power in China, that the Gregorian calendar became the world's most widely accepted dating system. Mao ordered the changeover, believing that replacing the ancient Chinese lunar calendar with the more accurate Gregorian calendar was central to China's march toward modernity.

Mao's order completed the world conquest of a calendar that takes its name from a 16th-century pope, Gregory XIII. Gregory earned his fame by revising the calendar already modified by Dennis and first launched by Roman leader Julius Caesar in 45 BC. Caesar, in turn, borrowed his calendar from the Egyptians, who invented their calendar some 4,000 years before that. On the long road to the Gregorian calendar, fragments of many other time-measuring schemes were incorporated-from India, Sumer, Babylon, Palestine, Arabia, and pagan Europe.

Despite persistent human efforts to track the passage of time, nearly every calendar ever created has been inaccurate. One reason is that the solar year (the precise amount of time it takes the Earth to revolve once around the Sun) runs an awkward 365.242199 days-hardly an easy number to calculate without modern instruments. Another complication is the tendency of the Earth to wobble and wiggle ever so slightly in its orbit, yanked this way and that by the Moon's elliptical orbit and by the gravitational tug of the Sun. As a result, each year varies in length by a few seconds, making the exact length of any given year extraordinarily difficult to pin down.
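The Gregorian calendar approximates this awkward number with a leap-year rule that fits in a few lines. A sketch of the rule and of the average year length it produces:

```python
def is_leap(year):
    """Gregorian rule: every 4th year is a leap year, except century
    years, unless the century year is divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The 400-year Gregorian cycle contains 97 leap days, giving an
# average year length very close to the solar year.
leap_days = sum(is_leap(y) for y in range(1, 401))
average_year = 365 + leap_days / 400

print(leap_days)     # 97
print(average_year)  # 365.2425, vs. a solar year of about 365.242199 days
```

The leftover difference, about 0.0003 day per year, amounts to roughly one day every 3,000 years, which is why even the Gregorian calendar is not perfectly accurate.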

If this sounds like splitting hairs, it is. Yet it also highlights some of the difficulties faced by astronomers, kings, priests, and other calendar makers, who tracked the seasons to know when to plant crops, collect taxes, or follow religious rituals.

The first efforts to keep a record of time probably occurred tens of thousands of years ago, when ancient humans in Europe and Africa peered up at the Moon and realized that its phases recurred in a steady, predictable fashion. A few people scratched what they saw onto rocks and bones, creating what may have been the world's first calendars. Heady stuff for skin-clad hominids, these calendars enabled them to predict when the silvery light would be available to hunt or to raid rival clans and to know how many full Moons would pass before the chill of winter gave way to spring.

The keepers of atomic time added a leap second to UTC. Millennium watchers everywhere began wondering whether they should add a second to the countless clocks on buildings, in shops, and in homes that were counting down to the third millennium to the very second. Most, though not all, made the change, adding another second of uncertainty to the question of when the new millennium begins.

All the while, the calendar invented by Caesar and Dennis the Little moves forward, rushing toward the next millennium 1,000 years from now: a progression of days, weeks, months, and years that appears to be here to stay, despite its flaws. Other calendars have been proposed to eliminate small errors in the Gregorian calendar. Some reformers, for example, support making the unequal months uniform by updating the ancient Egyptian scheme of 12 months of 30 days each, with 5 days remaining as holidays.

During the French Revolution, the government of France adopted the Egyptian calendar and decreed 1792 the year 1, a system that lasted until Napoleon restored the Gregorian calendar in 1806. More recently the United Nations (UN) and the Congress of the United States have reconsidered this historic alternative, calling it the World Calendar. To date, however, people seem content to use an ancient calendar designed by a Roman conqueror and an obscure abbot rather than fixing it or making it more accurate. Perhaps most of us prefer the illusion of a fixed timeline over admitting that time has meaning only because we say it does.

In any case, we should not take anything for granted, for no thoughtful conclusion in the study of the phenomenon of consciousness should be lightly dismissed as fallacious. That is all the more true when, exercising due caution, we try to move ahead toward a positive conclusion on the topic.

Many writers, along with a few well-known new-age gurus, have played fast and loose with interpretations that ground the mental in some vague sense of cosmic consciousness. If books of this kind are shelved, ever so erroneously, in the new-age section of a commercial bookstore and purchased by those interested in new-age literature, the buyers will be quite disappointed.

What makes our species unique is the ability to construct a virtual world in which the real world can be imaged and manipulated in abstract forms and ideas. Evolution has produced hundreds of thousands of species with brains, among them tens of thousands of species with complex behavioural and learning abilities. There are also many species in which sophisticated forms of group communication have evolved. For example, birds, primates, and social carnivores use extensive vocal and gestural repertoires to structure behaviour in large social groups. Although we share roughly 98 percent of our genes with our primate cousins, the course of human evolution widened the cognitive gap between us and all other species, including our cousins, into a yawning chasm.

Research in neuroscience has shown that language processing is a staggeringly complex phenomenon that places incredible demands on memory and learning. Language functions extend, for example, into all major lobes of the neocortex: auditory information is associated with the temporal area; tactile information is associated with the parietal area; and attention, working memory, and planning are associated with the frontal cortex of the left, or dominant, hemisphere. The left prefrontal region is associated with verb and noun production tasks and with the retrieval of words representing action. Broca's area, next to the mouth-tongue region of the motor cortex, is associated with vocalization in word formation, and Wernicke's area, next to the auditory cortex, is associated with sound analysis in the sequencing of words.

Lower brain regions, like the cerebellum, have also evolved in our species to help in language processing. Until recently, the cerebellum was thought to be exclusively involved with automatic or preprogrammed movements, such as throwing a ball, jumping over a high hurdle, or playing a musical instrument. Imaging studies in neuroscience suggest, however, that the cerebellum is also active during speech, and that it is most active when a subject is making difficult word associations. The cerebellum appears to play a role in these associations by providing access to automatic word sequences and by augmenting rapid shifts in attention.

The midbrain and brain stem, situated on top of the spinal cord, coordinate the numerous input and output systems that play a crucial role in the communicative functions we have adaptively acquired. Vocalization has a special association with the midbrain, which coordinates the interaction of the oral and respiratory tracts necessary to make speech sounds. Since vocalization requires synchronous activity among oral, vocal, and respiratory muscles, these functions probably connect to a central site. This site resembles the central gray area of the midbrain. The central gray area links the reticular nuclei and brain stem motor nuclei into a distributed network for sound production. While human speech depends on structures in the cerebral cortex and on rapid movements of the oral and vocal muscles, this is not true of vocalization in other mammals.

Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved through the addition of separate modules eventually wired together on some neural circuit board.

Similarly, individual linguistic symbols are consigned to clusters of distributed brain areas and are not confined to a particular area. The specific sound patterns of words may be produced in dedicated regions. All the same, the symbolic and referential relationships between words are generated through a convergence of neural codes from different and independent brain regions. The processes of word comprehension and retrieval result from combinations of simpler associative processes in several separate brain regions that require input from one another. The symbolic meaning of words, like the grammar that is essential for the construction of meaningful relationships between strings of words, is an emergent property of the complex interaction of several brain parts.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. Nevertheless, as this communication resulted in increasingly complex behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

Although male and female hominids favoured pair bonding and created more complex social organizations in the interests of survival, the interplay between social evolution and biological evolution changed the terms of survival radically. The enhanced ability to use symbolic communication to construct the terms of social interaction eventually made this communication the largest determinant of survival. Since this communication was based on a symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

Nonetheless, if we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.

Most experts agree that our ancestors began to use spoken language based on complex grammar and syntax between two hundred thousand and one hundred thousand years ago. The mechanisms in the human brain that allowed for this great achievement, however, clearly evolved over great spans of time. In biology textbooks, the lists of prior adaptations that enhanced the ability of our ancestors to use communication normally include increased intelligence, significant alterations of oral and auditory abilities, the lateralization of functional representations on the two sides of the brain, and the evolution of some innate or hard-wired grammar. When we look at how our ability to use language could actually have evolved over the entire course of hominid evolution, however, the process seems both more basic and more counterintuitive than we had previously imagined.

Although we share some aspects of vocalization with our primate cousins, the mechanisms of human vocalization are quite different and have evolved over great spans of time. Incremental increases in hominid brain size over the last 2.5 million years enhanced cortical control over the larynx, which originally evolved to prevent food and other particles from entering the windpipe, or trachea. This eventually contributed to the use of vocal symbolization. Humans have more voluntary motor control over sound produced in the larynx than any other vocal species, and this control is associated with higher brain systems involved in skeletal muscle control as opposed to just visceral control. As a result, humans have direct cortical motor control over phonation and oral movement, while chimps do not.

The larynx in modern humans sits in a comparatively low position in the throat, which significantly increases the range and flexibility of sound production. The low position of the larynx allows greater changes in the volume of the resonant chamber formed by the mouth and pharynx and makes it easier to shift sounds to the mouth and away from the nasal cavity. As a result, the sounds that comprise the vowel components of speech become much more variable, including extremes in resonance combinations such as the 'ee' sound in 'tree' and the 'aw' sound in 'flaw.' Equally important, the repositioning of the larynx dramatically increases the ability of the mouth and tongue to modify vocal sounds. This shift in the larynx also makes it more likely that food and water passing over the larynx will enter the trachea, which explains why humans are more prone to choking. Yet this disadvantage, which could have caused the shift to be selected against, was clearly outweighed by the advantage of being able to produce all the sounds used in modern language systems.

Some have argued that this removal of constraints on vocalization suggests that spoken language based on complex symbol systems emerged quite suddenly in modern humans only about one hundred thousand years ago. It is, however, far more likely that language use began with very primitive symbolic systems and evolved over time into increasingly complex systems. The first symbolic systems were not full-blown language systems, and they were probably not as flexible and complex as the vocal calls and gestural displays of modern primates. The first users of primitive symbolic systems probably coordinated most of their social communication with call and display behaviours like those of modern apes and monkeys.

Critically important to the evolution of enhanced language skills is that behavioural adaptions preceded and conditioned biological changes. This represents a reversal of the usual course of evolution, in which biological change precedes behavioural adaption. When the first hominids began to use stone tools, they probably did so in a very haphazard fashion, drawing on their flexible ape-like learning abilities. Still, the use of this technology over time opened a new ecological niche in which selective pressures occasioned new adaptions. As tool use became more indispensable for obtaining food and organizing social behaviours, mutations that enhanced the use of tools probably functioned as a principal source of selection for both bodies and brains.

The first stone choppers appear in the fossil record about 2.5 million years ago, and they appear to have been fabricated with a few sharp blows of stone on stone. It is reasonable to suppose that these primitive tools, which were hand-held and probably used to cut flesh and to chip bone to expose the marrow, were created by Homo habilis, the first large-brained hominid. Stone tool making is obviously a skill passed on from one generation to the next by learning, as opposed to a physical trait passed on genetically. After these tools became critical to survival, this introduced selection for learning abilities that did not exist for other species. Although the early tool makers may have had brains roughly comparable to those of modern apes, they were already beginning the process of becoming adapted for symbol learning.

The first symbolic representations were probably associated with social adaptations that were quite fragile, and any support that could reinforce these adaptions in the interest of survival would have been favoured by evolution. The expansion of the forebrain in Homo habilis, particularly the prefrontal cortex, was one of the core adaptations. Increased connectivity to brain regions involved in language processing enhanced this adaption over time.

It is easy to imagine why incremental improvements in symbolic representations provided a selective advantage. Symbolic communication probably enhanced cooperation in the relationship of mothers to infants, allowed foraging techniques to be more easily learned, served as the basis for better coordinating scavenging and hunting activities, and generally improved the prospect of attracting a mate. As the list of domains in which symbolic communication was used grew longer over time, this probably created new selective pressures that made the communication more elaborate. As more functions became dependent on this communication, those who failed at symbol learning, or could use symbols only awkwardly, were less likely to pass on their genes to subsequent generations.

The crude language of the earliest users of symbols must have consisted largely of gestures and nonsymbolic vocalizations. Their spoken language probably became relatively independent of this closed cooperative system only after the brains of hominids using symbolic communication had evolved, with symbolic forms progressively taking over functions served by nonsymbolic forms. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.

The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The subject, that is, must be able to relate the changes in his perceptions to his changing position within a perceivable, objective spatial world and to the essentially stable way the world is. The idea that there is an objective world goes together with the idea that the subject is somewhere, and where he is, is given by what he can perceive.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, it was Darwin who realized that the different chances of survival of differently endowed offspring could account for the natural evolution of species. Nature 'selects' those members of a species best adapted to the environment in which they find themselves, just as human animal breeders may select for desirable traits in their livestock, and thereby control the evolution of the kind of animal they wish. In the phrase of Spencer, nature guarantees the 'survival of the fittest.' The Origin of Species was principally successful in marshalling the evidence for evolution rather than in providing a convincing mechanism for genetic change, and Darwin himself remained open to the search for additional mechanisms, while remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the 'gene' as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution.

The solution to the mystery of how evolution by natural selection can shape sophisticated mechanisms is to be found in the workings of natural selection itself. The process is fundamentally very simple: natural selection occurs whenever genetically influenced variation among individuals affects their survival and reproduction. If a gene codes for characteristics that result in fewer viable offspring in future generations, that gene is gradually eliminated. For instance, genetic mutations that increase vulnerability to infection, or cause foolish risk-taking or lack of interest in sex, will never become common. On the other hand, genes that confer resistance to infection, appropriate risk-taking, and success in choosing fertile mates are likely to spread in the gene pool even if they have substantial costs.

A classic example is the spread of a gene for dark wing colour in a British moth population living downwind from a major source of air pollution. Pale moths were conspicuous on smoke-darkened trees and easily caught by birds, while a rare mutant form of the moth, whose colour closely matched that of the bark, escaped the predators' beaks. As the tree trunks became darkened, the mutant gene spread rapidly and largely displaced the gene for pale wing colour. All this is to say that natural selection involves no plan, no goal, and no direction: just genes increasing and decreasing in frequency depending on whether individuals with those genes have, compared with other individuals, greater or lesser reproductive success.
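The "genes increasing and decreasing in frequency" dynamic in the moth example can be sketched as a toy simulation. This is a minimal haploid model, not the actual population genetics of the historical case; the survival probabilities are invented for illustration.

```python
# Toy haploid model of the moth example: a "dark" allele competes with a
# "pale" allele, and survival differs by how well each matches the bark.
# The survival probabilities are illustrative assumptions, not field data.

def next_frequency(p_dark, survival_dark, survival_pale):
    """One generation of viability selection on a haploid population."""
    mean_fitness = p_dark * survival_dark + (1 - p_dark) * survival_pale
    return p_dark * survival_dark / mean_fitness

def run(generations, p_dark, survival_dark, survival_pale):
    for _ in range(generations):
        p_dark = next_frequency(p_dark, survival_dark, survival_pale)
    return p_dark

# On soot-darkened trees, dark moths survive the birds better, so the
# initially rare mutant spreads and largely displaces the pale allele.
p = run(50, p_dark=0.01, survival_dark=0.9, survival_pale=0.6)
print(f"dark-allele frequency after 50 generations: {p:.3f}")
```

With equal survival rates the frequency stays put, which is the "no plan, no goal, no direction" point: nothing drives the change except the difference in reproductive success.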

The simplicity of natural selection has been obscured by many misconceptions. For instance, Herbert Spencer's nineteenth-century catchphrase 'survival of the fittest' is widely thought to summarize the process, but it actually invites several misunderstandings. First, survival is of no consequence by itself. This is why natural selection has created some organisms, such as salmon and annual plants, that reproduce only once, then die. Survival increases fitness only insofar as it increases later reproduction. Genes that increase lifetime reproduction will be selected for even if they result in reduced longevity. Conversely, a gene that decreases total lifetime reproduction will obviously be eliminated by selection even if it increases an individual's survival.

Further confusion arises from the ambiguous meaning of 'fittest.' The fittest individuals in the biological sense are not necessarily the healthiest, strongest, or fastest. In today's world, as in many of those of the past, individuals of outstanding athletic accomplishment need not be the ones who produce the most grandchildren, a measure that should be roughly correlated with fitness. To someone who understands natural selection, it is no surprise that parents concern themselves with their children's reproduction.

A gene or an individual cannot be called 'fit' in isolation but only with reference to a particular species in a particular environment. Even in a single environment, every gene involves compromise. Consider a gene that makes rabbits more fearful and thereby helps to keep them from the jaws of foxes. Imagine that half the rabbits in a field have this gene. Because they do more hiding and less eating, these timid rabbits might be, on average, somewhat less well fed than their bolder companions. If, of a hundred timid rabbits hunkered down through the winter awaiting spring, two-thirds starve to death, while this is the fate of only one-third of the rabbits who lack the gene for fearfulness, the gene has been selected against. It might be nearly eliminated by a few harsh winters. Milder winters or an increased number of foxes could have the opposite effect; it all depends on the current environment.
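The environment-dependence of fitness in the rabbit example can be sketched as a toy trade-off model. All the risk numbers below are invented assumptions chosen only to show how the favoured allele flips with the environment.

```python
# Toy model of the rabbit example: the "fearful" allele trades starvation
# risk (more hiding, less eating) against predation risk (fewer fox
# encounters). All probabilities are invented for illustration.

def survival(fearful, winter_harshness, fox_density):
    """Probability of surviving one winter, given the environment."""
    starvation = winter_harshness * (0.6 if fearful else 0.3)
    predation = fox_density * (0.1 if fearful else 0.5)
    return (1 - starvation) * (1 - predation)

# Harsh winter, few foxes: hiding costs more meals than it saves.
print(survival(True, 0.9, 0.2) < survival(False, 0.9, 0.2))  # -> True
# Mild winter, many foxes: fearfulness pays off.
print(survival(True, 0.1, 0.9) > survival(False, 0.1, 0.9))  # -> True
```

The same allele is advantageous in one environment and disadvantageous in the other, which is why 'fit' only makes sense relative to a particular environment.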

The version of an evolutionary ethic called 'social Darwinism' emphasizes the struggle for natural selection and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently the relation between evolution and ethics has been rethought in the light of biological discoveries concerning altruism and kin selection.

The most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, as this communication resulted in increasingly complex and condensed behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

Because this communication was based on a symbolic vocalization that required the evolution of neural mechanisms and processes that did not evolve in any other species, it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete account of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. No scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actual experience of that thought or feeling as an emergent aspect of global brain function.

The movement toward a more complex order in biological reality is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere is itself a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system can be viewed as another stage in the evolution of more complicated and complex systems, one marked by the appearance of a profound new complementarity in the relationship between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. Even so, it does make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be 'real' only when it is an 'observed' phenomenon, we are led to some interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we encounter an 'event horizon' of knowledge where science can say nothing about the actual character of this reality. If this is a property of the entire universe, then we must also conclude that undivided wholeness exists at the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or 'actualized' in acts of observation or measurement. Since the reality that exists between space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the 'indivisible' whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actual character of the indivisible whole.

The scientific implications of this extraordinary relationship between parts (to know what it is like to have an experience is to know its qualia) and the indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.

All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn here should appear self-evident in logical and philosophical terms. Nor is it necessary to attribute any extra-scientific properties to the whole in order to embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. What is required is that we distinguish between what can be 'proven' in scientific terms and what can be reasonably 'inferred' in philosophical terms based on the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally have expertise on only one side of the two-culture divide. Perhaps more important, many potential threats to the human future, such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation, can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent here is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with the background implications should feel free to ignore them. The hope, however, is that readers who do engage this material will find in it a common ground for understanding.

Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on complex language systems, one particularly relevant for our purposes, concerns consciousness of self. Consciousness of self as an independent agency or actor is predicated on a fundamental distinction, or dichotomy, between this self and other selves. Self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separate and distinct from the material realm. It was the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.

In a thought experiment, instead of bringing about a course of events, as in a normal experiment, we are invited to imagine one. We may then be able to 'see' that some result follows, or that some description is appropriate, or our inability to describe the situation may itself have some consequence. Thought experiments played a major role in the development of physics: for example, Galileo probably never dropped two balls of unequal weight from the Leaning Tower of Pisa to refute the Aristotelean view that a heavy body falls faster than a lighter one. He merely asked us to imagine a heavy body made into the shape of a dumbbell, its connecting rod made gradually thinner until it is finally severed. The object is one heavy body until the last moment, and then two light ones, but it is incredible that this final snip alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. There is no consensus on the legitimate place of thought experiments, either as substitutes for real experiments or as reliable devices for discerning possibilities. Thought experiments one dislikes are sometimes called intuition pumps.

For familiar reasons, it is common to hypothesize that people are characterized by their rationality, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say or what to do. Not all thinking is verbal, since chess players, composers, and painters all think, and there is no reason that their deliberations should take any more verbal a form than their actions. It is perennially tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium, that represents aspects of the world. The model has been attacked, however, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. Such an inner presence also seems unnecessary, since an intelligent outcome might in principle arise without it.

In the philosophy of mind, as in ethics, the treatment of animals exposes major problems: if other animals differ from human beings, how is the difference to be characterized? Do animals think and reason, or have thoughts and beliefs? For philosophers as different as Aristotle and Kant, the possession of reason separates humans from animals, and alone allows entry to the moral community.

For Descartes, animals are mere machines and lack consciousness or feelings. In the ancient world the rationality of animals was defended with the example of Chrysippus’ dog. This animal, tracking prey, comes to a crossroads with three exits and, without pausing to pick up the scent, reasons, according to Sextus Empiricus: the prey went either by this road, or by that, or by the other; it did not go by this or by that; therefore, it went by the other. The ‘syllogism of the dog’ was discussed by many writers, since in Stoic cosmology animals should occupy a place on the great chain of being somewhat below human beings, the only terrestrial rational agents. Philo Judaeus wrote a dialogue attempting to show, against Alexander of Aphrodisias, that the dog’s behaviour does not exhibit rationality, but simply shows it following the scent; by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutarch sides with Philo; Aquinas discusses the dog; and scholastic thought in general was quite favourable to brute intelligence (animals were commonly made to stand trial for various offences in medieval times). In the modern era Montaigne uses the dog to remind us of the frailties of human reason; Rorarius undertook to show not only that beasts are rational, but that they make better use of reason than people do. James I of England defended the syllogizing dog, and Henry More and Gassendi both took issue with Descartes on the matter. Hume was an outspoken defender of animal cognition, but with the rise of the view that language is the essential manifestation of mentality, animals’ silence began to count heavily against them, and they are denied thoughts altogether by, for instance, Davidson.
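The dog’s reported inference is an instance of disjunctive syllogism: from A ∨ B ∨ C, ¬A, and ¬B, conclude C. As a minimal illustrative sketch (the road labels are invented for the example, not part of any source), the elimination can be written in a few lines of Python:

```python
# Disjunctive syllogism as elimination over a finite set of alternatives.
# The exit names below are hypothetical labels for illustration only.
def eliminate(alternatives, ruled_out):
    """Return the alternatives that survive after some are ruled out."""
    return [a for a in alternatives if a not in ruled_out]

exits = ["first road", "second road", "third road"]
# The dog finds no scent on the first two roads:
remaining = eliminate(exits, {"first road", "second road"})
print(remaining)  # → ['third road']: the dog takes the only remaining exit
```

The point of the classical debate is precisely whether the dog performs anything like this explicit elimination, or merely follows the scent.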

Dogs are frequently shown in pictures of philosophers, as symbols of their assiduity and fidelity.

It is worth noting that Descartes’s first work, the Regulae ad Directionem Ingenii (1628/9), was never completed. In Holland between 1628 and 1649, Descartes first wrote, and then cautiously suppressed, Le Monde (1634), and in 1637 produced the Discours de la méthode as a preface to the treatise on mathematics and physics in which he introduced the notion of Cartesian co-ordinates. His best-known philosophical work, the Meditationes de Prima Philosophia (Meditations on First Philosophy), together with objections by distinguished contemporaries and replies by Descartes (the Objections and Replies), appeared in 1641. The authors of the objections are: first set, the Dutch theologian Caterus; third set, Hobbes; fourth set, Arnauld; fifth set, Gassendi; and sixth set, Mersenne. The second edition (1642) of the Meditations included a seventh set by the Jesuit Pierre Bourdin. Descartes’s penultimate work, the Principia Philosophiae (Principles of Philosophy), published in 1644, was designed partly for use as a theological textbook. His last work was Les Passions de l’âme (The Passions of the Soul), published in 1649. He died in Sweden, where he contracted pneumonia, allegedly through being required to break his normal habit of late rising in order to give lessons at 5:00 a.m. His last words are supposed to have been ‘Ça, mon âme, il faut partir’ (So, my soul, it is time to part).

All the same, Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible.

The Cartesian doubt is the method of investigating the extent of knowledge and its basis in reason or experience, used by Descartes in the first two Meditations. It attempted to put knowledge upon a secure foundation by first inviting us to suspend judgement on any proposition whose truth can be doubted, even as a bare possibility. The standards of acceptance are gradually raised as we are asked to doubt the deliverances of memory, the senses, and even reason, all of which are in principle capable of letting us down. The point of certainty is eventually found in the celebrated ‘Cogito ergo sum’: I think, therefore I am. By locating the point of certainty in my awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the Cartesian dualism, or separation of mind and matter into two different but interacting substances. Descartes found that it takes a divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses he invokes a ‘clear and distinct perception’ of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, ‘to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit.’

By contrast, Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the problems this dualism raises. In his conception of matter Descartes also gives preference to rational cogitation over anything from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature.

Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has been rejected many times, their relentless exposure of the hardest issues, their exemplary clarity, and even their initial plausibility all contrive to make him the central point of reference for modern philosophy.

The term instinct (Lat., instinctus, impulse or urge) implies innately determined behaviour, inflexible in the face of changing circumstances and outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of such behaviour was used in defence of this position as early as Avicenna. A continuity between animal and human reason was proposed by Hume, and followed by sensationalists such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense, being social may be instinctive in human beings. Given what we now know about the evolution of human language abilities, however, our real or actualized self is clearly not imprisoned in our minds.

The self is implicitly a part of the larger whole of biological life. The human observer derives its existence from embedded relations to this whole, and constructs its reality on evolved mechanisms that exist in all human brains. This suggests that any sense of the ‘otherness’ of self and world is an illusion, one that disguises the actual relations between the parts and the whole. The self, in its relation to the temporality of being, belongs to a biological reality. A proper definition of this whole must, of course, include the cosmos and the unbroken evolution of all life from the first self-replicating molecule that was the ancestor of DNA. It should also include the complex interactions among all the parts of biological reality from which self-regulating wholes emerge, wholes whose properties in turn sustain the existence of the parts.

Complications arise because ordinary language conditions how developments in physical reality and metaphysical concerns can be described. In the history of science and mathematics, the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to the first scientific revolution of the seventeenth century. Understanding this helps us see how the classical paradigm in physics resulted in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings, but an attempt to draw out what self-realization and undivided wholeness imply for physical reality and for the epistemological foundations of physical theory.

The subjectivity of our mind affects our perceptions of the world that is held to be objective by natural science. We may regard both aspects, mind and matter, as individualized forms that belong to the same underlying reality.

Our everyday experience confirms the apparent fact that there is a dual-valued world of subjects and objects. We, as conscious, experiencing beings with personality, are the subjects, whereas everything for which we can come up with a name or designation seems to be the object, that which is opposed to us as subject. Physical objects are only part of the object-world. There are also mental objects, objects of our emotions, abstract objects, religious objects, etc. Language objectifies our experience. Experiences per se are purely sensational and do not make a distinction between object and subject. Only verbalized thought reifies the sensations by conceptualizing them and pigeonholing them into the given entities of language.

Some thinkers maintain that subject and object are only different aspects of experience: I can experience myself as subject and, in the act of self-reflection, as object. The fallacy of this argument is obvious: being a subject implies having an object. We cannot experience something consciously without the mediation of understanding and mind. Our experience is already conceptualized at the time it comes into our consciousness. Our experience is negative insofar as it destroys the original pure experience. In a dialectical process of synthesis, the original pure experience becomes an object for us. The common state of our mind is only capable of apperceiving objects. Objects are reified negative experience. The same is true for the objective aspect of this theory: by objectifying myself I do not dispense with the subject; rather, the subject is causally and apodeictically linked to the object. As soon as I make an object of anything, I have to realize that it is the subject which objectifies something. It is only the subject who can do that. Without the subject there are no objects, and without objects there is no subject. This interdependence, however, is not to be understood in terms of a dualism in which object and subject are really independent substances. Since the object is only created by the activity of the subject, and the subject is not a physical entity but a mental one, we have to conclude that the subject-object dualism is purely mentalistic.

The Cartesian dualism posits the subject and the object as separate, independent, and real substances, both of which have their ground and origin in the highest substance of God. Cartesian dualism, however, contradicts itself: by positing the ‘I,’ that is, the subject, as the only certainty, Descartes defied materialism, and thus the concept of some ‘res extensa.’ The physical thing is only probable in its existence, whereas the mental thing is absolutely and necessarily certain. The subject is superior to the object. The object is only derived, but the subject is the original. This makes the object not only inferior in its substantive quality and in its essence, but relegates it to a level of dependence on the subject. The subject recognizes that the object is a ‘res extensa,’ and this means that the object cannot have essence or existence without acknowledgment by the subject. The subject posits the world in the first place, and the subject is posited by God. Apart from the problem of interaction between these two different substances, Cartesian dualism is thus inadequate for explaining and understanding the subject-object relation.

By denying Cartesian dualism and resorting to monistic theories such as extreme idealism, materialism, or positivism, the problem is not resolved either. What the positivists did was merely to verbalize the subject-object relation in linguistic forms. It was no longer a metaphysical problem, but only a linguistic one: our language has formed this object-subject dualism. These thinkers are very superficial and shallow, because they do not see that in the very act of their analysis they inevitably think in the mind-set of subject and object. By relativizing object and subject in terms of language and analytical philosophy, they avoid the elusive and problematical opposition of subject and object, which has been the fundamental question in philosophy ever since. Shunning these metaphysical questions is no solution. Excluding something by reducing it to a more material and verifiable level is not only pseudo-philosophy but actually a depreciation and decadence of the great philosophical ideas of mankind.

Therefore, we have to come to grips with the idea of subject and object in a new manner. We experience this dualism as a fact in our everyday lives. Every experience is subject to this dualistic pattern. The question, however, is whether this underlying pattern of subject-object dualism is real or only mental. Science assumes it to be real. This assumption does not prove the reality of our experience, but only that with this method science is most successful in explaining our empirical facts. Mysticism, on the other hand, believes that there is an original unity of subject and object. To attain this unity is the goal of religion and mysticism: man has fallen from this unity by disgrace and by sinful behaviour, and his task is now to get back on track and strive toward this highest fulfilment. Again, on the conclusion made above, are we not forced to admit that the mystic way of thinking is also only a pattern of the mind, and that mystics, like the scientists, have their own frame of reference and methodology for explaining the supra-sensible facts most successfully?

If we assume mind to be the originator of the subject-object dualism, then we can neither confer more reality on the physical aspect than on the mental one, nor deny the one in terms of the other.

The crude language of the earliest users of symbols must have been complemented by gestures and nonsymbolic vocalizations. Their spoken language probably only later became relatively independent as a closed cooperative system. Only after hominids began to use symbolic communication did symbolic forms progressively take over functions served by non-vocal symbolic forms. This is reflected in modern languages. The structure of syntax in these languages often reveals its origins in pointing gestures, in the manipulation and exchange of objects, and in more primitive constructions of spatial and temporal relationships. We still use nonverbal vocalizations and gestures to complement meaning in spoken language.

The general idea is very powerful: the relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. The subject thus faces the idea of a perceivable, objective spatial world that causes his ideas, a world within which he perceives his own changing position against the more or less stable way the world is. The idea that there is an objective world is bound up with the idea that the subject is somewhere, and where he is is given by what he can perceive.

Research in neuroscience reveals that the human brain is a massively parallel system in which language processing is widely distributed. Computer-generated images of human brains engaged in language processing reveal a hierarchical organization consisting of complicated clusters of brain areas that process different component functions in controlled time sequences. Language processing is clearly not accomplished by stand-alone or unitary modules that evolved as separate modules and were eventually wired together on some neural circuit board.

While the brain that evolved this capacity was obviously a product of Darwinian evolution, the most critical precondition for the evolution of this brain cannot be simply explained in these terms. Darwinian evolution can explain why the creation of stone tools altered conditions for survival in a new ecological niche in which group living, pair bonding, and more complex social structures were critical to survival. Darwinian evolution can also explain why selective pressures in this new ecological niche favoured pre-adaptive changes required for symbolic communication. All the same, as this communication resulted in increasingly complex and condensed social behaviour, social evolution began to take precedence over physical evolution, in the sense that mutations resulting in enhanced social behaviour became selectively advantageous within the context of the social behaviour of hominids.

Because this communication was based on symbolic vocalization, it required the evolution of neural mechanisms and processes that did not evolve in any other species, and it marked the emergence of a mental realm that would increasingly appear as separate and distinct from the external material realm.

If the emergent reality in this mental realm cannot be reduced to, or entirely explained as, the sum of its parts, it seems reasonable to conclude that this reality is greater than the sum of its parts. For example, a complete account of the manner in which light of particular wavelengths is processed by the human brain to generate a particular colour says nothing about the experience of colour. In other words, a complete scientific description of all the mechanisms involved in processing the colour blue does not correspond with the colour blue as perceived in human consciousness. No scientific description of the physical substrate of a thought or feeling, no matter how complete, can account for the actualized experience of that thought or feeling as an emergent aspect of global brain function.

If we could, for example, define all of the neural mechanisms involved in generating a particular word symbol, this would reveal nothing about the experience of the word symbol as an idea in human consciousness. Conversely, the experience of the word symbol as an idea would reveal nothing about the neuronal processes involved. While one mode of understanding the situation necessarily displaces the other, both are required to achieve a complete understanding of the situation.

Even within biological reality, movement toward a more complex order is associated with the emergence of new wholes that are greater than the sum of their parts. The entire biosphere is a whole that displays self-regulating behaviour greater than the sum of its parts. The emergence of a symbolic universe based on a complex language system could be viewed as another stage in the evolution of more complicated and complex systems, marked by the appearance of a new profound complementarity in relationships between parts and wholes. This does not allow us to assume that human consciousness was in any sense preordained or predestined by natural process. It does, however, make it possible, in philosophical terms at least, to argue that this consciousness is an emergent aspect of the self-organizing properties of biological life.

If we also concede that an indivisible whole contains, by definition, no separate parts, and that a phenomenon can be assumed to be ‘real’ only when it is an ‘observed’ phenomenon, we are led to more interesting conclusions. The indivisible whole whose existence is inferred from the results of these experiments cannot in principle be itself the subject of scientific investigation. There is a simple reason why this is the case. Science can claim knowledge of physical reality only when the predictions of a physical theory are validated by experiment. Since the indivisible whole cannot be measured or observed, we confront an ‘event horizon’ of knowledge where science can say nothing about the actual character of this reality. If non-locality is a property of the entire universe, then we must also conclude that an undivided wholeness exists on the most primary and basic level in all aspects of physical reality. What we are dealing with in science per se, however, are manifestations of this reality, which are invoked or ‘actualized’ in making acts of observation or measurement. Since the reality that exists between the space-like separated regions is a whole whose existence can only be inferred in experience, as opposed to proven by experiment, the correlations between the particles, and the sum of these parts, do not constitute the ‘indivisible’ whole. Physical theory allows us to understand why the correlations occur. Nevertheless, it cannot in principle disclose or describe the actualized character of the indivisible whole.

The scientific implications of this extraordinary relationship between parts (qualia) and indivisible whole (the universe) are quite staggering. Our primary concern, however, is a new view of the relationship between mind and world that carries even larger implications in human terms. When this is factored into our understanding of the relationship between parts and wholes in physics and biology, then mind, or human consciousness, must be viewed as an emergent phenomenon in a seamlessly interconnected whole called the cosmos.

All that is required to embrace the alternative view of the relationship between mind and world that is consistent with our most advanced scientific knowledge is a commitment to metaphysical and epistemological realism and a willingness to follow arguments to their logical conclusions. Metaphysical realism assumes that physical reality has an actual existence independent of human observers or any act of observation; epistemological realism assumes that progress in science requires strict adherence to scientific methodology, or to the rules and procedures for doing science. If one can accept these assumptions, most of the conclusions drawn should appear self-evident in logical and philosophical terms. It is also not necessary to attribute any extra-scientific properties to the whole in order to embrace the new relationship between part and whole and the alternative view of human consciousness that is consistent with this relationship. It is in this sense that we distinguish between what can be ‘proven’ in scientific terms and what can be reasonably ‘inferred’ in philosophical terms based on the scientific evidence.

Moreover, advances in scientific knowledge rapidly became the basis for the creation of a host of new technologies. Yet those responsible for evaluating the benefits and risks associated with the use of these technologies, much less their potential impact on human needs and values, normally had expertise on only one side of a two-culture divide. Perhaps more important, many of the potential threats to the human future - such as environmental pollution, arms development, overpopulation, the spread of infectious diseases, poverty, and starvation - can be effectively solved only by integrating scientific knowledge with knowledge from the social sciences and humanities. We have not done so for a simple reason: the implications of the amazing new fact of nature called non-locality cannot be properly understood without some familiarity with the actual history of scientific thought. The intent is not to suggest that what is most important about this background can be understood in its absence. Those who do not wish to struggle with the background implications should feel free to ignore them. The hope, however, is that this material, which should be no more challenging than the rest, will provide a common ground for understanding, so that we can meet again in an effort to close the circle and complete the picture of the universe in its unification.

Another aspect of the evolution of a brain that allowed us to construct symbolic universes based on a complex language system, one particularly relevant for our purposes, concerns consciousness of self. Consciousness of self as an independent agency or actor is predicated on a fundamental distinction or dichotomy between this self and other selves. Self, as it is constructed in human subjective reality, is perceived as having an independent existence and a self-referential character in a mental realm separate and distinct from the material realm. It was the assumed separation between these realms that led Descartes to posit his famous dualism in understanding the nature of consciousness in the mechanistic classical universe.

In a thought experiment, instead of bringing a course of events, as in a normal experiment, we are invited to imagine one. We may then be able to ‘see’ that some result following, or tat some description is appropriate, or our inability to describe the situation may itself have some consequences. Thought experiments played a major role in the development of physics: For example, Galileo probably never dropped two balls of unequal weight from the leaning Tower of Pisa, in order to refute the Aristotelean view that a heavy body falls faster than a lighter one. He merely asked used to imagine a heavy body made into the shape of a dumbbell, and then connecting rod gradually thinner, until it is finally severed. The thing is one heavy body until the last moment and he n two light ones, but it is incredible that this final outline alters the velocity dramatically. Other famous examples include the Einstein-Podolsky-Rosen thought experiment. In the philosophy of personal identity, our apparent capacity to imagine ourselves surviving drastic changes of body, brain, and mind is a permanent source of difficulty. There is no consensus on the legitimate place of thought experiments, to substitute either for real experiment, or as a reliable device for discerning possibilities. Thought experiments are alike of one that dislikes and are sometimes called intuition pumps.

For familiar reasons, supposing that people are characterized by their rationality is common, and the most evident display of our rationality is our capacity to think. This is the rehearsal in the mind of what to say, or what to do. Not all thinking is verbal, since chess players, composers and painters all think, and there is no a priori reason that their deliberations should take any more verbal a form than this actions. It is permanently tempting to conceive of this activity in terms of the presence in the mind of elements of some language, or other medium that represents aspects of the world. Still, the model has been attacked, notably by Wittgenstein, as insufficient, since no such presence could carry a guarantee that the right use would be made of it. Such an inner present seems unnecessary, since an intelligent outcome might arise in principle weigh out it.

In the philosophy of mind as well as ethics the treatment of animals exposes major problems if other animals differ from human beings, how is the difference to be characterized: Do animals think and reason, or have thoughts and beliefs? In philosophers as different as Aristotle and Kant the possession of reason separates humans from animals, and alone allows entry to the moral community.

For Descartes, animals are mere machines and ee lack consciousness or feelings. In the ancient world the rationality of animals is defended with the example of Chrysippus’ dog. This animal, tracking a prey, comes to a cross-roads with three exits, and without pausing to pick-up the scent, reasoning, according to Sextus Empiricus. The animal went either by this road, or by this road, or by that, or by the other. However, it did not go by this or that, but he went the other way. The ‘syllogism of the dog’ was discussed by many writers, since in Stoic cosmology animals should occupy a place on the great chain of being somewhat below human beings, the only terrestrial rational agents: Philo Judaeus wrote a dialogue attempting to show again Alexander of Aphrodisias that the dog’s behaviour does no t exhibit rationality, but simply shows it following the scent, by way of response Alexander has the animal jump down a shaft (where the scent would not have lingered). Plutah sides with Philo, Aquinas discusses the dog and scholastic thought in general was quite favourable to brute intelligence (being made to stand trail for various offences in medieval times was common for animals). In the modern era Montaigne uses the dog to remind us of the frailties of human reason: Rorarious undertook to show not only that beasts are rational, but that they make better use of reason than people do. James the first of England defends the syllogising dog, and Henry More and Gassendi both takes issue with Descartes on that matter. Hume is an outspoken defender of animal cognition, but with their use of the view that language is the essential manifestation of mentality, animals’ silence began to count heavily against them, and they are completely denied thoughts by, for instance Davidson.

The term instinct (Lat., instinctus, impulse or urge) implies innately determined behaviour, flexible to change in circumstance outside the control of deliberation and reason. The view that animals accomplish even complex tasks not by reason was common to Aristotle and the Stoics, and the inflexibility of their outline was used in defence of this position as early as Avicennia. A continuity between animal and human reason was proposed by Hume, and followed by sensationalist such as the naturalist Erasmus Darwin (1731-1802). The theory of evolution prompted various views of the emergence of stereotypical behaviour, and the idea that innate determinants of behaviour are fostered by specific environments is a guiding principle of ethology. In this sense that being social may be instinctive in human beings, and for that matter too reasoned on what we now know about the evolution of human language abilities, however, our real or actualized self is clearly not imprisoned in our minds.

It is implicitly a part of the larger whole of biological life, human observers its existence from embedded relations to this whole, and constructs its reality as based on evolved mechanisms that exist in all human brains. This suggests that any sense of the ‘otherness’ of self and world be is an illusion, in that disguises of its own actualization are to find all its relations between the part that are of their own characterization. Its self as related to the temporality of being whole is that of a biological reality. It can be viewed, of course, that a proper definition of this whole must not include the evolution of the larger undissectible whole. Yet, the cosmos and unbroken evolution of all life, by that of the first self-replication molecule that was the ancestor of DNA. It should include the complex interactions that have proven that among all the parts in biological reality that any resultant of emerging is self-regulating. This, of course, is responsible to properties owing to the whole of what might be to sustain the existence of the parts.

Ordinary language, with all its complications, has been extended into the complex coordinate systems of mathematics, and these developments have been conditioned by both physical reality and metaphysical concerns. That is, in the history of mathematics the exchanges between the mega-narratives and frame tales of religion and science were critical factors in the minds of those who contributed to it. The first scientific revolution of the seventeenth century allowed scientists to understand better how the classical paradigm in physics resulted in the stark Cartesian division between mind and world that became one of the most characteristic features of Western thought. This is not, however, another strident and ill-mannered diatribe against our misunderstandings; it draws instead upon the principles of undivided wholeness in physical reality and the epistemological foundations of physical theory.

Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece rather than, say, in the Chinese or Babylonian cultures, though why this should be so remains partly a matter of speculation. Part of the answer is that the social, political, and economic climates in Greece were more open to the pursuit of knowledge and allowed greater cultural accessibility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. However, it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.

The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletos. Thales apparently came to this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view 'essences' underlying and unifying physical reality as if they were 'substances.'

Nonetheless, the belief that the mind of God as the Divine Architect permeates the workings of nature was a foundational principle of scientific thought for Johannes Kepler, as it was for most of his contemporaries, and one can feel some discomfort in reading Kepler's original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of the word. 'Physical laws,' wrote Kepler, 'lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God's, at least as far as we can understand something of it in this mortal life.'

The history of science amply testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into 'customary points of view and forms of perception.' The framers of classical physics derived, like the rest of us, their 'customary points of view and forms of perception' from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.

A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: 'We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts.'

It is time for the religious imagination and the religious experience to engage the complementary truths of science and to fill that silence with meaning. However, this does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to any ontology, and is in no way diminished by the lack of one. One is free to recognize a basis for an exchange between science and religion, just as one is free to deny that this basis exists: there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in ontology remains what it has always been, a question; and the physical universe on the most basic level remains what it has always been, a riddle. The ultimate answer to the question and the ultimate meaning of the riddle are, and probably always will be, a matter of personal choice and conviction.

Our frame of reference here is largely an attempt to work out an alternate conception of the relationship between mind and world, together with its defining features and fundamental preoccupations. There is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates such a conception; the essential point of attention is consciousness, which remains at a preliminary stage of our study.

But at the end of this sometimes labourious journey we arrive at conclusions that should make the trip very worthwhile. To begin, there is no basis in contemporary physics or biology for believing in the stark Cartesian division between mind and world that some have rather aptly described as 'the disease of the Western mind.' In addition, let us consider the legacy in Western intellectual life of the stark division between mind and world sanctioned by René Descartes.

Descartes is commonly regarded as the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this characterization is misleading for several reasons. In the first place, Descartes' conception of philosophy was very different from our own. The term 'philosophy' in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics, and medicine. Descartes' reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by 'metaphysics' he meant inquiries into God and the soul and, generally, all the first things to be discovered by philosophizing. Yet he was quick to realize that while this view provided untold benefits, uniting heaven and earth in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien from the world of everyday life. Even so, there was nothing in this view of nature that could explain or provide a foundation for the mental, or for all that we know from direct experience as distinctly human.

These fundamental explorations include questions about knowledge and certainty, but even here Descartes is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved; his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent algebraic geometry.

A scientific understanding of these ideas could be derived, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton's Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. The dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.

The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concern for its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes's stark division between mind and matter became perhaps the most central feature of Western intellectual life.

Thus the view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes became a central preoccupation in Western intellectual life. The tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, or a void, that cannot be bridged or reconciled.

In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, and wholes were made up of collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.

The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which ‘solved the riddle of the universe,’ but only to replace it by another riddle: The riddle of itself. Yet, we discover the ‘certain principles of physical reality,’ said Descartes, ‘not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth.’ Since the real, or that which actually remains external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.

Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally 'revealed' truths, and it was this seventeenth-century metaphysical presupposition that became, in the history of science, what is termed the 'hidden ontology of classical epistemology.' The legacy of Descartes lingers in the widespread conviction that science does not provide a 'place for man' or for all that we know as distinctly human in subjective reality.

The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, and James, and in most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.

A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself, in so far as I am only a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.

Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. The key premise is the unified consciousness that I have of myself.

Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James' well-known version of the argument starts as follows: Take a sentence of a dozen words, and take twelve men and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.

James generalizes this observation to all conscious states. To get dualism out of it, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. The thought experiment is meant to show that conscious states cannot be so distributed; therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes; versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).

Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be a bit difficult for us to recapture this now, but the idea that unified consciousness could not be achieved by any system of components, and for an even stronger reason not by any system of material components, had a strong intuitive appeal for a long time.

The notion of the unity of consciousness was in addition central to one of Kant's own famous arguments, his 'transcendental deduction of the categories.' In this argument, boiled down to its essentials, Kant claims that in order to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we could do, we must apply certain concepts to the items in question. In particular we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called 'modal' concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.

It was relational conceptual representation that most interested Kant and of relational concepts, he thought the concept of cause-and-effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause and effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on ‘the secure path of a science.’ The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.

Although the unity of consciousness had been at the centre of pre-20th century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to it. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being viewed as either a myth or at least something that we cannot, and do not need to, study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: Consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of 'phenomenology' into a respectable theory.

The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.

To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists talk of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness as distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.

It is very natural to think of self-consciousness or, more accurately, self-consciousnesses, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know many of my properties and much of what is happening to me, at both physical and mental levels. I also know things about my past, things I have done, and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? All of this presupposes my ability to think about myself. Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thinking about other people and other objects.

When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.

Even so, given the range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose. This is my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.

The proposed account would be a deflationary account of self-consciousness. If there is a straightforward explanation of what makes 'self contents' immune to error through misidentification in terms of the semantics of the first person, then it seems fair to say that the problem of self-consciousness has been dissolved at least as much as solved.

This proposed account would be on a par with other noted examples of deflationism, such as the redundancy theory of truth. The redundancy, or deflationary, theory of truth claims that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence 'redundancy'), and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true,' the predicate functions as a device enabling us to generalize rather than as an adjective describing the things he said, or the kinds of propositions that follow from true propositions. For example, 'all logical consequences of true propositions are true' translates as (∀p)(∀q)((p ∧ (p → q)) → q), in which no notion of truth appears. It is supposed in classical (two-valued) logic that each statement has exactly one of the two values, true or false, and not both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true; if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme, as may the issue of whether falsity is the only way of failing to be true. On this view, if a language is provided with a truth definition in the style of the semantic theory of truth, that is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or truth as shared across different languages. The view is similar to that of the disquotational theory.
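The two deflationary points above can be written out schematically; the following is an illustrative sketch in standard logical notation, not the theory's official formalism:

```latex
% (1) Equivalence (redundancy) schema: prefixing "it is true that"
%     adds nothing to the content of p.
\text{It is true that } p \;\leftrightarrow\; p

% (2) Generalization without a truth predicate:
%     "all logical consequences of true propositions are true"
\forall p\,\forall q\,\bigl(\,(p \land (p \rightarrow q)) \rightarrow q\,\bigr)
```

Schema (2) shows the deflationist's point: what looked like a claim about truth is expressed using only propositional quantification and the conditional, with no truth predicate at all.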

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse.' Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science asserts 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.

It is important to stress how the redundancy or deflationary theory of self-consciousness, like any theory that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:

Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thought by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)

So how can a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.

The best developed functionalist theory of self-reference is that of Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, that is, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since 'agency entails neither linguistic ability nor conscious belief.' The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions with utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. To illustrate, consider a creature 'x' who is hungry and has a desire for food at time 't'. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of 'x' at that time. Moreover, for b(p) to cause 'x' to eat what is in front of it at 't', b(p) must be a belief that 'x' has at 't'. Therefore, the utility/truth condition of b(p) is that whatever creature has this belief faces food when it has it. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be 'I am facing food now.' On the other hand, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.

For in order to believe 'p', I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures that only my simultaneous hunger can provoke, and only by making me eat, and only then. That's what makes my belief refer to me and to the time at which I have it. And that's why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any 'sense' of 'I' or 'now,' to fix the reference of my subjective beliefs: causal contiguity fixes them for me.
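Mellor's equation of truth conditions with utility conditions can be made vivid with a toy model. This is only an illustrative sketch of the hungry-creature example above; all class and function names are invented for this purpose, and nothing here is Mellor's own formalism.

```python
# Toy model of Mellor's functionalist account of subjective belief:
# a belief is a causal function from desires to actions, and its
# truth condition equals its utility condition -- the condition under
# which the action it causes (given the desire) satisfies that desire.
# All names are invented for illustration.

from dataclasses import dataclass


@dataclass
class World:
    food_in_front: bool  # the state of affairs the belief is "about"


@dataclass
class Creature:
    hungry: bool                 # the desire for food
    believes_facing_food: bool   # the token subjective belief b(p)


def act(creature: Creature) -> str:
    """Belief and desire jointly cause the action."""
    if creature.hungry and creature.believes_facing_food:
        return "eat what is in front"
    return "do nothing"


def desire_satisfied(action: str, world: World) -> bool:
    """The utility condition: eating satisfies hunger only if food is there."""
    return action == "eat what is in front" and world.food_in_front


# The belief is true (its utility condition obtains) just when the
# creature that has it is in fact facing food at that time.
c = Creature(hungry=True, believes_facing_food=True)
print(desire_satisfied(act(c), World(food_in_front=True)))   # True
print(desire_satisfied(act(c), World(food_in_front=False)))  # False
```

The point of the sketch is that nothing in the model mentions an 'I' or a 'now': the belief's reference to the believer and the time is fixed by which creature tokens it and when it causes the eating, mirroring the causal-contiguity claim in the passage above.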

Causal contiguity, according to this explanation, is the reason why no internal representation of the self is required, even at what other philosophers have called the subpersonal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are simply the believer and the time.

The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe the things we do in fact believe.

The contiguity law in Leibniz extends the principle that there are no discontinuous changes in nature: ‘natura non facit saltus,’ nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. For Hume, likewise, the contiguity of events is an important element in our interpreting their conjunction as causal.

Leading advocates of the functionalist point of view are Putnam and Sellars, and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or ‘realization’ of the program the machine is running. The principal advantage of functionalism is its fit with the way we come to know mental states, both our own and those of others: via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, when our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be variably realized in causal architectures, just as much as they can be in different neurophysiological states.

Nevertheless, confronted with the range of putatively self-conscious cognitive states, one might assume that there is a single ability that they all presuppose: the ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.

Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency: this involves concepts and descriptions that can apply equally to myself and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects: precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain ‘I’-thoughts.

Nonetheless, either both subject and object, both mind and matter, are real, or both are unreal, imaginary. The assumption of a merely illusory subject or illusory object leads to dead ends and absurdities. It would entail an extreme form of skepticism, wherein everything is relative or subjective and nothing could be known for sure. This is not only devastating for the human mind but also quite ludicrous.

Does this leave us with the only option that both subject and object are real? That would again create a real dualism, which, as we realized, is created only in our mind. So, what part of this dualism is not real?

To answer this, we first have to inquire into the meaning of the term ‘real.’ ‘Reality’ comes from the Latin word ‘realitas,’ which could be literally translated as ‘thing-hood.’ But ‘res’ does not only mean a material thing; it can have many different meanings in Latin, most of which have little to do with materiality: affairs, events, business, a coherent collection of any kind, a situation, and so on. Such meanings are always subjective, and therefore related to the way of thinking and feeling of human beings. Outside of the realm of human beings, reality has no meaning at all. Only in the context of conscious and rational beings does reality become something meaningful. Reality is the whole of human affairs insofar as these are related to the world around us. Reality is never the bare physical world without the human being. Reality is the totality of human experience and thought in relation to an objective world.

This is the next aspect we have to analyse. Is this objective world, which we encounter in our experience and thought, something that exists on its own, or is it dependent on our subjectivity? That the subjective mode of our consciousness affects our perceptions of the objective world is conceded by most scientists. Nevertheless, they assume a real and objective world that would exist even without a human being alive to observe it. One way to handle this problem is the Kantian solution of the ‘thing-in-itself,’ which is inaccessible to our mind because of the mind's inherent limitations. This does not help us very much; it just posits some undefinable entity outside of our experience and understanding. Hegel, on the other hand, denied the inaccessibility of the ‘thing-in-itself’ and thought that knowledge of the world as it is in itself is attainable, but only by ‘absolute knowing,’ the highest form of consciousness.

One of the most persuasive arguments for an independent objective world is the following: if we set up a camera in a landscape where no human beings are present, leave, let the camera take some pictures automatically on a timer, and come back some days later to develop the pictures, we will find the same picture of the landscape as if we had taken it ourselves. Common sense tells us likewise: when we wake up in the morning, it is highly probable, even certain, that we find ourselves in the same environment, without changes, without things having left their places uncaused.

Is this empirical argument sufficient to persuade even the most sceptical thinker that there is an objective world out there? Hardly. If a sceptic nonetheless tries to uphold the position of a solipsistic monism, the above-mentioned argument would only be valid if the objects out there were assumed to be subjective mental constructs. Not even Berkeley assumed such an extreme position. His immaterialism was based on the presumption that the world around us is the object of God's mind; that is, all objects are ideas in a universal mind. This is more persuasive. We could even close the gap between the religious concept of ‘God’ and the philosophical concept by relating both of them to the modern quantum-physical concept of the vacuum. All have one thing in common: there must be an underlying reality which contains and produces all the objects. This idea of an underlying reality is, interestingly enough, a continuous line of thought throughout the history of mankind. Almost every great philosopher and every great religion assumed some kind of supreme reality. I deal with this idea in my historical account of the mind's development.

We are still stuck with the problem of subject and object. If we assume that there is an underlying reality, neither physical nor mental, neither object nor subject, but producing both aspects, we end up with the identity of subject and object. So long as there is only this universal ‘vacuum,’ nothing is yet differentiated; everything is one and the same. By a dialectical process of division, or by random fluctuations of the vacuum, elementary forms are created, which develop into more complex forms and finally into living beings with both a mental and a physical aspect. The only question to answer is how these two aspects were produced and developed. Maybe there are an infinite number of aspects, but only two are visible to us, as Spinoza postulated. Also, since mind does not evolve out of matter, either there must have been a concomitant evolution of mind and matter, or matter has evolved whereas mind has not, in which case mind would somehow be valued as superior to matter; but since both are aspects of one reality, both are alike significant. Science conceives the whole physical world and human beings to have evolved gradually from an original vacuum state of the universe (a singularity). So, has mind just popped into the world at some time in the past, or has mind emerged from the complexity of matter? The latter is not sustainable, and this leaves us with the possibility that the other aspect, mind, has different attributes and qualities. This can be seen empirically: we do not believe that our personality is something material, or that our emotions, our love and fear, are of a physical nature. The qualia and properties of consciousness are completely different from the properties of matter as science has defined it. By the very nature and essence of each aspect, we can therefore assume a different dialectical movement for each.
Whereas matter is by the very nature of its properties bound to evolve gradually, existing in perpetual movement and change, mind, by the very nature of its own properties, is bound to a different evolution and existence. Mind as such has not evolved. The individualized form of mind in the human body, that is, the subject, can change, although in different ways than matter changes. Both aspects have their own sets of laws and patterns. Since mind is also non-local, it comprises all individual minds. Actually, there is only one consciousness, which is only artificially split into individual minds because of its connection with brain organs, which are the means of manifestation and expression for consciousness. Both aspects are interdependent and constitute the world and the beings as we know them.

Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometry and numerical relationships. We imagine that the seeds of the scientific imagination were planted in ancient Greece rather than in, say, the Chinese or Babylonian cultures, partly because the social, political, and economic climates in Greece were more open to the pursuit of knowledge and made that knowledge more culturally accessible. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigations. But it was only after this inheritance from Greek philosophy was wedded to some essential features of Judeo-Christian beliefs about the origin of the cosmos that the paradigm for classical physics emerged.

The Greek philosophers we now recognize as the originators of scientific thought were mystics who probably perceived their world as replete with spiritual agencies and forces. The Greek religious heritage made it possible for these thinkers to attempt to coordinate diverse physical events within a framework of immaterial and unifying ideas. The fundamental assumption that there is a pervasive, underlying substance out of which everything emerges and into which everything returns is attributed to Thales of Miletus. Thales apparently came to this conclusion out of the belief that the world was full of gods, and his unifying substance, water, was similarly charged with spiritual presence. Religion in this instance served the interests of science because it allowed the Greek philosophers to view ‘essences’ underlying and unifying physical reality as if they were ‘substances.’

Nonetheless, the belief that the mind of God as the Divine Architect permeates the workings of nature was a guiding principle of scientific thought for Johannes Kepler, and most contemporary physicists would probably feel some discomfort in reading Kepler’s original manuscripts. Physics and metaphysics, astronomy and astrology, geometry and theology commingle there with an intensity that might offend those who practice science in the modern sense of that word. ‘Physical laws,’ wrote Kepler, ‘lie within the power of understanding of the human mind; God wanted us to perceive them when he created us in His image so that we may take part in His own thoughts . . . Our knowledge of numbers and quantities is the same as that of God’s, at least as far as we can understand something of it in this mortal life.’

The history of science amply testifies to the manner in which scientific objectivity results in physical theories that must be assimilated into ‘customary points of view and forms of perception.’ The framers of classical physics derived, like the rest of us, their ‘customary points of view and forms of perception’ from macro-level visualizable experience. Thus, the descriptive apparatus of visualizable experience became reflected in the classical descriptive categories.

A major discontinuity appears, however, as we move from a descriptive apparatus dominated by the character of our visualizable experience to a complete description of physical reality in relativistic and quantum physics. The actual character of physical reality in modern physics lies largely outside the range of visualizable experience. Einstein was acutely aware of this discontinuity: ‘We have forgotten what features of the world of experience caused us to frame pre-scientific concepts, and we have great difficulty in representing the world of experience to ourselves without the spectacles of the old-established conceptual interpretation. There is the further difficulty that our language is compelled to work with words that are inseparably connected with those primitive concepts.’

It is time for the religious imagination and the religious experience to engage the complementary truths of science in filling that silence with meaning. However, this does not mean that those who do not believe in the existence of God or Being should refrain in any sense from assessing the implications of the new truths of science. Understanding these implications does not require a commitment to any ontology, and is in no way diminished by the lack of one. One is as free to recognize a basis for an exchange between science and religion as one is free to deny that this basis exists: there is nothing in our current scientific world-view that can prove the existence of God or Being, and nothing that legitimates any anthropomorphic conceptions of the nature of God or Being. The question of belief in an ontology remains what it has always been, a question, and the physical universe on the most basic level remains what it has always been, a riddle. And the ultimate answer to the question and the ultimate meaning of the riddle are, and probably will always be, a matter of personal choice and conviction.

Our frame of reference is meant to capture a rich affiliation between mind and world, with its own defining features and fundamental preoccupations; there is certainly nothing new in the suggestion that the contemporary scientific world-view legitimates an alternate conception of the relationship between mind and world. The essential point of attention is ‘consciousness,’ which remains central to our study.

But at the end of this sometimes laborious journey we reach a conclusion that should make the trip worthwhile: there is no ground in contemporary physics or biology for believing in the stark Cartesian division between mind and world that some have rather aptly described as ‘the disease of the Western mind.’ Let us therefore consider the legacy in Western intellectual life of the stark division between mind and world sanctioned by René Descartes.

Descartes is often called the father of modern philosophy, inasmuch as he made epistemological questions the primary and central questions of the discipline. But this is misleading for several reasons. In the first place, Descartes’s conception of philosophy was very different from our own. The term ‘philosophy’ in the seventeenth century was far more comprehensive than it is today, and embraced the whole of what we nowadays call natural science, including cosmology and physics, and subjects like anatomy, optics, and medicine. Descartes’s reputation as a philosopher in his own time was based as much as anything on his contributions in these scientific areas. Secondly, even in those Cartesian writings that are philosophical in the modern academic sense, the epistemological concerns are rather different from the conceptual and linguistic inquiries that characterize present-day theory of knowledge. Descartes saw the need to base his scientific system on secure metaphysical foundations: by ‘metaphysics’ he meant inquiries into God and the soul and, generally, all the first things to be discovered by philosophizing. Yet he was quick to realize that while this system united heaven and earth in a shared and communicable frame of knowledge, it presented us with a view of physical reality that was totally alien from the world of everyday life. There was nothing in this view of nature that could explain or provide a foundation for the mental, or for all of direct experience that is distinctly human.

These foundational explorations include questions about knowledge and certainty, but even here Descartes is not primarily concerned with the criteria for knowledge claims, or with definitions of the epistemic concepts involved; his aim is to provide a unified framework for understanding the universe. Descartes was convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.

A scientific understanding of these ideas could be achieved, Descartes declared, with the aid of precise deduction, and he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Isaac Newton’s ‘Principia Mathematica’ in 1687, reductionism and mathematical modelling became the most powerful tools of modern science. And the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and principle of scientific knowledge.

The radical separation between mind and nature formalized by Descartes served over time to allow scientists to concentrate on developing mathematical descriptions of matter as pure mechanism, without any concerns about its spiritual dimension or ontological foundations. Meanwhile, attempts to rationalize, reconcile, or eliminate Descartes’s stark division between mind and matter became perhaps the most central feature of Western intellectual life.

Thus the view of the relationship between mind and world sanctioned by classical physics and formalized by Descartes became a central preoccupation in Western intellectual life. And the tragedy of the Western mind is that we have lived since the seventeenth century with the prospect that the inner world of human consciousness and the outer world of physical reality are separated by an abyss, a void, that cannot be bridged or reconciled.

In classical physics, external reality consisted of inert and inanimate matter moving according to wholly deterministic natural laws, and wholes were made up of collections of discrete atomized parts. Classical physics was also premised, however, on a dualistic conception of reality as consisting of abstract disembodied ideas existing in a domain separate from and superior to sensible objects and movements. The notion that the material world experienced by the senses was inferior to the immaterial world experienced by mind or spirit has been blamed for frustrating the progress of physics up to at least the time of Galileo. But in one very important respect it also made the first scientific revolution possible. Copernicus, Galileo, Kepler, and Newton firmly believed that the immaterial geometrical and mathematical ideas that inform physical reality had a prior existence in the mind of God and that doing physics was a form of communion with these ideas.

The tragedy of the Western mind is a direct consequence of the stark Cartesian division between mind and world. This is the tragedy of the modern mind which ‘solved the riddle of the universe,’ but only to replace it with another riddle: the riddle of itself. We discover the ‘certain principles of physical reality,’ said Descartes, ‘not by the prejudices of the senses, but by rational analysis, which thus possess so great evidence that we cannot doubt of their truth.’ Since the real, or that which actually exists external to ourselves, was in his view only that which could be represented in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.

Given that Descartes distrusted the information from the senses to the point of doubting the perceived results of repeatable scientific experiments, how did he conclude that our knowledge of the mathematical ideas residing only in mind or in human subjectivity was accurate, much less the absolute truth? He did so by making a leap of faith: God constructed the world, said Descartes, according to the mathematical ideas that our minds could uncover in their pristine essence. The truths of classical physics as Descartes viewed them were quite literally ‘revealed’ truths, and it was this seventeenth-century metaphysical presupposition that became in the history of science what is termed the ‘hidden ontology of classical epistemology.’ Descartes’s legacy lingers in the widespread conviction that science does not provide a ‘place for man,’ or for all that we know as distinctly human, in subjective reality.

The notion of the unity of consciousness has had an interesting history in philosophy and psychology. Taking Descartes to be the first major philosopher of the modern period, the unity of consciousness was central to the study of the mind for the whole of the modern period until the 20th century. The notion figured centrally in the work of Descartes, Leibniz, Hume, Reid, Kant, Brentano, and James, and in most of the major precursors of contemporary philosophy of mind and cognitive psychology. It played a particularly important role in Kant's work.

A couple of examples will illustrate the role that the notion of the unity of consciousness played in this long literature. Consider a classical argument for dualism (the view that the mind is not the body, indeed is not made out of matter at all). It starts like this: When I consider the mind, that is to say, myself insofar as I am merely a thinking thing, I cannot distinguish in myself any parts, but apprehend myself to be clearly one and entire.

Descartes then asserts that if the mind is not made up of parts, it cannot consist of matter, presumably because, as he saw it, anything material has parts. He then goes on to say that this would be enough to prove dualism by itself, had he not already proved it elsewhere. The evidence he appeals to is the unified consciousness that I have of myself.

Here is another, more elaborate argument based on unified consciousness. The conclusion will be that no system of components acting in concert could ever achieve unified consciousness. William James's well-known version of the argument starts as follows: Take a sentence of a dozen words, take twelve men, and give each man one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.

James generalizes this observation to all conscious states. To get dualism out of this, we need to add a premise: that if the mind were made out of matter, conscious states would have to be distributed over some group of components in some relevant way. This thought experiment is meant to show that conscious states cannot be so distributed. Therefore, the conscious mind is not made out of matter. Call the argument that James is using the Unity Argument. Clearly, the idea that our consciousness of, here, the parts of a sentence is unified is at the centre of the Unity Argument. Like the first, this argument goes all the way back to Descartes. Versions of it can be found in thinkers otherwise as different from one another as Leibniz, Reid, and James. The Unity Argument continued to be influential into the 20th century. That the argument was considered a powerful reason for concluding that the mind is not the body is illustrated in a backhanded way by Kant's treatment of it (as he found it in Descartes and Leibniz, not James, of course).

Kant did not think that we could uncover anything about the nature of the mind, including whether or not it is made out of matter. To make the case for this view, he had to show that all existing arguments that the mind is not material do not work, and he set out to do just this in the chapter of the Critique of Pure Reason on the Paralogisms of Pure Reason (1781); paralogisms are faulty inferences about the nature of the mind. The Unity Argument is the target of a major part of that chapter: if one is going to show that we cannot know what the mind is like, one must dispose of the Unity Argument, which purports to show that the mind is not made out of matter. Kant's argument that the Unity Argument does not support dualism is simple. He urges that the idea of unified consciousness being achieved by something that has no parts or components is no less mysterious than its being achieved by a system of components acting together. Remarkably enough, though no philosopher has ever met this challenge of Kant's, and no account exists of what an immaterial mind not made out of parts might be like, philosophers continued to rely on the Unity Argument until well into the 20th century. It may be a bit difficult for us to appreciate this now, but the idea that no system of components, and a fortiori no system of material components, could achieve unified consciousness had a strong intuitive appeal for a long time.

The notion of the unity of consciousness was in addition central to one of Kant's own famous arguments, his ‘transcendental deduction of the categories.’ In this argument, boiled down to its essentials, Kant claims that to tie various objects of experience together into a single unified conscious representation of the world, something that he simply assumed we could do, we must apply certain concepts to the items in question. In particular, we have to apply concepts from each of four fundamental categories of concept: quantitative, qualitative, relational, and what he called ‘modal’ concepts. Modal concepts concern whether an item might exist, does exist, or must exist. Thus, the four kinds of concept are concepts for how many units, what features, what relations to other objects, and what existence status is represented in an experience.

It was relational conceptual representation that most interested Kant, and of the relational concepts he thought the concept of cause and effect to be by far the most important. Kant wanted to show that natural science (which for him meant primarily physics) was genuine knowledge (he thought that Hume's sceptical treatment of cause-and-effect relations challenged this status). He believed that if he could prove that we must tie items in our experience together causally if we are to have a unified awareness of them, he would have put physics back on ‘the secure path of a science.’ The details of his argument have exercised philosophers for more than two hundred years. We will not go into them here, but the argument illustrates how central the notion of the unity of consciousness was in Kant's thinking about the mind and its relation to the world.

Consciousness may well be the most challenging and pervasive source of problems in the whole of philosophy. Our own consciousness seems to be the most basic fact confronting us, yet it is almost impossible to say what consciousness is. Is mine like yours? Is ours like that of animals? Might machines come to have consciousness? Is it possible for there to be disembodied consciousness? Whatever complex biological and neural processes go on backstage, it is my consciousness that provides the theatre where my experiences and thoughts have their existence, where my desires are felt and where my intentions are formed. But then how am I to conceive the ‘I,’ or self, that is the spectator of this theatre? One of the difficulties in thinking about consciousness is that the problems seem not to be scientific ones: Leibniz remarked that if we could construct a machine that could think and feel, and blow it up to the size of a mill so as to be able to examine its working parts as thoroughly as we pleased, we would still not find consciousness; he drew the conclusion that consciousness resides in simple subjects, not complex ones. Even if we are convinced that consciousness somehow emerges from the complexity of brain functioning, we may still feel baffled about the way the emergence takes place, or why it takes place in just the way it does.

The nature of conscious experience has been the largest single obstacle to physicalism, behaviourism, and functionalism in the philosophy of mind: these are all views that, according to their opponents, can only be believed by feigning permanent anaesthesia. But many philosophers are convinced that we can divide and conquer: we may make progress by breaking the subject into different skills, and by recognizing that rather than a single self or observer we would do better to think of a relatively undirected whirl of cerebral activity, with no inner theatre, no inner lights, and above all no inner spectator.

Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception. (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of ‘sensible qualities’: colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received (much of this complexity has been revealed by the difficulty of writing programs enabling computers to recognize quite simple aspects of the visual scene). The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings, and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like sense data or percepts exacerbates the tendency.
But once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there now seems little connection between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.

A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.

To be without an idea is to be without a concept, and likewise to be without a concept is to be without an idea. Idea (Gk., visible form) is a notion stretching all the way from one pole, where it denotes a subjective, internal presence in the mind, somehow thought of as representing something about the world, to the other pole, where it represents an eternal, timeless, unchanging form or concept: the concept of the number series or of justice, for example, thought of as independent objects of enquiry and perhaps of knowledge. These two poles are not distinct meanings of the term, although they give rise to many problems of interpretation, but between them they define a space of philosophical problems. On the one hand, ideas are that with which we think, or, in Locke's terms, whatever the mind may be employed about in thinking. Looked at that way they seem to be inherently transient, fleeting, and unstable private presences. On the other, ideas provide the way in which objective knowledge can be expressed. They are the essential components of understanding, and any intelligible proposition that is true must be capable of being understood. Plato's theory of 'Forms' is a celebration of the objective and timeless existence of ideas as concepts, and in his hands ideas are reified to the point where they make up the only real world, of separate and perfect models of which the empirical world is only a poor cousin. This doctrine, notably in the Timaeus, opened the way for the Neoplatonic notion of ideas as the thoughts of God. The concept gradually lost this other-worldly aspect, until after Descartes ideas became assimilated to whatever it is that lies in the mind of any thinking being.

Together with a general bias toward the sensory, so that what lies in the mind may be thought of as something like images, and a belief that thinking is well explained as the manipulation of images, this was developed by Locke, Berkeley, and Hume into a full-scale view of the understanding as the domain of images, although they were all aware of anomalies that were later regarded as fatal to this doctrine. The defects in the account were exposed by Kant, who realized that the understanding needs to be thought of more in terms of rules and organized principles than of any kind of copy of what is given in experience. Kant also recognized the danger of the opposite extreme (that of Leibniz) of failing to connect the elements of understanding with those of experience at all (Critique of Pure Reason).

It has become more common to think of ideas, or concepts, as dependent upon social and especially linguistic structures, rather than as the self-standing creatures of an individual mind, but the tension between the objective and the subjective aspects of the matter lingers on, for instance in debates about the possibility of objective knowledge, of indeterminacy in translation, and of identity between the thoughts people entertain at one time and those that they entertain at another.

To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term 'idea' was formerly used in the same way, but is avoided because of its association with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term. Frege regarded predicates as incomplete expressions for a function, just as 'sine . . .' or 'log . . .' is incomplete. Predicates refer to concepts, which themselves are 'unsaturated,' and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.

Mental states have contents: a belief may have the content that I will catch the train, a hope may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something - a particular object, or property, or relation, or some other entity.

Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of May Smith, or as the person located in a certain room now. More generally, concepts 'c' and 'd' are distinct if a person can believe that something is 'c' without believing that it is 'd.' As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by 'that . . .' clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.

Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; none the less, we can come to learn that Anthony Blunt, an historian and Surveyor of the Queen's Pictures, was a spy: we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person's conception of a just arrangement for resolving disputes may involve something like contemporary Western legal systems. But whether or not it would be correct, it is quite intelligible for someone to reject this conception by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.

A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy - and perhaps even some of our contemporaries - are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought 'I think,' containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of the two kinds of theory are distinct, each is required to have an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how the concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.

A fundamental question for philosophy is: what individuates a given concept - that is, what makes it the concept it is, rather than any other concept? One answer, which has been developed in great detail, is that it is impossible to give a non-trivial answer to this question. An alternative addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose contents contain it as a constituent. So, to take a simple case, one could propose that the logical concept 'and' is individuated by this condition: it is the unique concept 'C' to possess which a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses 'A' and 'B,' 'A and B' can be inferred; and from any premiss 'A and B,' each of 'A' and 'B' can be inferred. Again, a relatively observational concept such as 'round' can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those which are. A statement that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
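The two inference forms mentioned for 'and' can be set out schematically. This is a standard natural-deduction presentation (the rule labels are ours, not part of the original text):

```latex
% Introduction: from the two premisses A and B, infer 'A and B'.
\frac{A \qquad B}{A \wedge B}\;(\wedge\text{-introduction})
\qquad
% Eliminations: from the premiss 'A and B', infer each conjunct.
\frac{A \wedge B}{A}\;(\wedge\text{-elimination})
\qquad
\frac{A \wedge B}{B}\;(\wedge\text{-elimination})
```

On the possession-condition view, finding these transitions primitively compelling, without further inference or information, is exactly what individuates the concept 'and.'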

A possession condition for a particular concept may actually make use of that concept. The possession condition for 'and' does not. We can also expect to use relatively observational concepts in specifying the kinds of experience that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question as such within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker's mastery of a concept is inextricably tied to how he finds it natural to go on in new cases in applying the concept.

Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two of the families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of numerical quantifiers (there are 0 so-and-so's, there is 1 so-and-so, . . .); and the family consisting of the concepts 'belief' and 'desire.' Such families have come to be known as 'local holisms.' A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is to meet such-and-such condition involving the thinker, C1, and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated, and the possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.

A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. It is also intuitively plausible that, even though a thinker's non-environmental properties and relations remain constant, the conceptual content of his mental states can vary if the thinker's social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker's social relations, in particular his linguistic relations.

Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends into the territory of a thinker's reasons for making judgements. A thinker's visual perception can give him good reason for judging 'That man is bald': it does not by itself give him good reason for judging 'Rostropovich is bald,' even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for a concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . .) which makes the practices of judgement and inference in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker's previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a newly encountered object. The judgement is correct if the new object has the property that in fact makes the judgemental practices in the possession condition yield true judgements, or truth-preserving inferences.

Despite the fact that the unity of consciousness had been at the centre of pre-20th-century research on the mind, early in the 20th century the notion almost disappeared. Logical atomism in philosophy and behaviourism in psychology were both unsympathetic to the notion. Logical atomism focussed on the atomic elements of cognition (sense data, simple propositional judgments, etc.), rather than on how these elements are tied together to form a mind. Behaviourism urged that we focus on behaviour, the mind being regarded as either a myth or something that we cannot and need not study scientifically. This attitude extended to consciousness, of course. The philosopher Daniel Dennett summarizes the attitude prevalent at the time this way: consciousness may be the last bastion of occult properties, epiphenomena, immeasurable subjective states - in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of 'phenomenology' into a respectable theory.

The unity of consciousness next became an object of serious attention in analytic philosophy only as late as the 1960s. In the years since, new work has appeared regularly. The accumulated literature is still not massive but the unity of consciousness has again become an object of serious study. Before we examine the more recent work, we need to explicate the notion in more detail than we have done so far and introduce some empirical findings. Both are required to understand recent work on the issue.

To expand on our earlier notion of the unity of consciousness, we need to introduce a pair of distinctions. Current work on consciousness labours under a huge, confusing terminology. Different theorists talk of access consciousness, phenomenal consciousness, self-consciousness, simple consciousness, creature consciousness, state consciousness, monitoring consciousness, awareness as equated with consciousness, awareness distinguished from consciousness, higher-order thought, higher-order experience, qualia, the felt qualities of representations, consciousness as displaced perception, . . . and on and on. We can ignore most of this profusion, but we do need two distinctions: between consciousness of objects and consciousness of our representations of objects, and between consciousness of representations and consciousness of self.

It is very natural to think of self-consciousness as a cognitive state or, more accurately, as a set of cognitive states. Self-knowledge is an example of such a cognitive state. There are plenty of things that I know about myself. I know the sort of thing I am: a human being, a warm-blooded rational animal with two legs. I know of many of my properties and much of what is happening to me, at both physical and mental levels. I also know things about my past: things I have done and people I have met. But I have many self-conscious cognitive states that are not instances of knowledge. For example, I have the capacity to plan for the future - to weigh up possible courses of action in the light of goals, desires, and ambitions. I am capable of a certain type of moral reflection, tied to moral self-understanding and moral self-evaluation. I can pursue questions like: What sort of person am I? Am I the sort of person I want to be? Am I the sort of individual that I ought to be? Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects.

When I say that I am a self-conscious creature, I am saying that I can do all these things. But what do they have in common? Could I lack some and still be self-conscious? These are central questions that take us to the heart of many issues in metaphysics, the philosophy of mind, and the philosophy of psychology.

Even so, faced with this range of putatively self-conscious cognitive states, one might naturally assume that there is a single ability that they all presuppose. This is my ability to think about myself. I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding.

The proposed account would be on a par with other noted examples of deflationary accounts. If there is a straightforward explanation, in terms of the semantics of the first person, of what makes 'self' contents immune to error through misidentification, then it seems fair to say that the problem of self-consciousness has been dissolved, at least as much as solved.

This proposed account would be on a par with other noted examples, such as the redundancy theory of truth. That is to say, the redundancy theory, or deflationary view, of truth claims that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence 'redundancy'), and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true,' the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second claim is translated as '(∀p, q)((p & (p → q)) → q),' where there is no use of a notion of truth.

There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse.' Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science aims at a state in which, whenever it holds that 'p,' then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.

It is important to stress how the redundancy or deflationary theory of self-consciousness, like any theory of consciousness that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language:

Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed. We communicate thought by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language; it is these principles, which relate to what is open to view in the employment of language, unaided by any supposed contact between mind and mind, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp. (Dummett, 1978)

So how can a thinker incapable of reflexively referring to himself, as English speakers do with the first-person pronoun, plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are in fact first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.

The best developed functionalist theory of self-reference has been proposed by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a subjective belief, which is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief, since agency entails neither linguistic ability nor conscious belief. The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief through the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. Consider a creature 'x' who is hungry and has a desire for food at time 't.' That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it at that time. The utility condition of that belief is that there is food in front of 'x' at that time. Moreover, for b(p) to cause 'x' to eat what is in front of it at 't,' b(p) must be a belief that 'x' has at 't.' Therefore, the utility/truth condition of b(p) is that whatever creature has this belief faces food when it is in fact facing food. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be 'I am facing food now.' Nevertheless, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.
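The belief/desire/action structure just described can be caricatured in code. This is a toy sketch under our own assumptions, not Mellor's formalism; all the names and the dictionary representation of the world are illustrative only:

```python
# Toy model (illustrative only): a belief is treated as a causal function
# from a desire to an action; its utility/truth condition is the condition
# under which the action it causes satisfies that desire.

def belief_facing_food(desire):
    """Token subjective belief b(p): 'I am facing food now'."""
    if desire == "food":
        return "eat what is in front of me"
    return None  # this belief combines only with the hunger desire

def utility_condition(world):
    """The condition under which acting on b(p) satisfies the desire."""
    return world.get("food_in_front_of_believer_now", False)

# Conjoined with hunger, the belief causes eating...
action = belief_facing_food("food")
# ...and the desire is satisfied just in case food really is in front of
# the believer at that time -- which is thereby the belief's truth condition.
satisfied = utility_condition({"food_in_front_of_believer_now": True})
```

Nothing in the sketch mentions a first-person pronoun or a self-representation: the belief is individuated purely by how it maps the desire onto behaviour, which is the point of the functionalist proposal.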

For in order to believe 'p,' I need only be disposed to eat what I face if I feel hungry: a disposition which causal contiguity ensures that only my simultaneous hunger can provide, and only by making me eat what I then face, and only then. That is what makes my belief refer to me and to when I have it. And that is why I need have no idea who I am or what the time is, no concept of the self or of the present, no implicit or explicit grasp of any 'sense' of 'I' or 'now,' to fix the reference of my subjective beliefs: causal contiguity fixes them for me.

Causal contiguity may also explain why no internal representation of the self is required, even at what other philosophers have called the sub-personal level. Mellor believes that reference to distal objects can take place when an internal state serves as a causal surrogate for the distal object, and hence as an internal representation of that object. No such causal surrogate, and hence no such internal representation, is required in the case of subjective beliefs. The relevant causal components of subjective beliefs are the believer and the time.

The necessary contiguity of cause and effect is also the key to the functionalist account of self-reference in conscious subjective belief. Mellor adopts a relational theory of consciousness, equating conscious beliefs with second-order beliefs to the effect that one is having a particular first-order subjective belief. It is simply a fact about our cognitive constitution that these second-order beliefs are reliably, though of course fallibly, generated, so that we tend to believe that we believe things that we do in fact believe.

The contiguity law in Leibniz extends the principle that there are no discontinuous changes in nature: 'natura non facit saltum,' nature makes no leaps. Leibniz was able to use the principle to criticize the mechanical system of Descartes, which would imply such leaps in some circumstances, and to criticize contemporary atomism, which implied discontinuous changes of density at the edge of an atom. According to Hume, however, the contiguity of events is an important element in our interpretation of their conjunction as causal.

Functionalism's advocates include Putnam and Sellars, and its guiding principle is that we can define mental states by a triple of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states, both our own and others', via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism functionalism is too generous, and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to persons whose causal structure may be rather different from our own. It may then seem as though beliefs and desires can be 'variably realized' in causal architectures, just as much as they can be in different neurophysiological states.
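The point that a state might be 'variably realized' can be illustrated with the software analogy the paragraph itself invokes. This is our own made-up example, not one from the functionalist literature: two implementations with quite different internals occupy the same functional role, because they map the same inputs onto the same outputs:

```python
# Two "realizations" of one functional role. Each maps the same typical
# causes (calls to add) onto the same effects (the value of count), while
# the underlying "hardware" (the data structure) differs entirely.

class ListCounter:
    """Realizes the counting role by storing every item in a list."""
    def __init__(self):
        self._items = []
    def add(self, item):
        self._items.append(item)
    def count(self):
        return len(self._items)

class TallyCounter:
    """Realizes the same role with a bare integer tally."""
    def __init__(self):
        self._n = 0
    def add(self, item):
        self._n += 1
    def count(self):
        return self._n

# From the outside the two are functionally indistinguishable:
a, b = ListCounter(), TallyCounter()
for x in ("p", "q", "r"):
    a.add(x)
    b.add(x)
```

On the functionalist picture, what makes both objects 'counters' is exactly this sameness of input-output role, just as what makes two states both beliefs is their shared causal profile, whatever realizes it.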

In logic and mathematics, a function is a relation that associates members of one class 'X' with a unique member 'y' of another class 'Y.' The association is written y = f(x); the class 'X' is called the domain of the function, and 'Y' its range. Thus 'the father of x' is a function whose domain includes all people and whose range is the class of male parents, but the relation 'son of x' is not a function, because a person can have more than one son. 'Sine x' is a function of the angle x; the circumference of a circle, πx, is a function of its diameter x; and so on. Functions may take sequences x1, . . ., xn as their arguments, in which case they may be thought of as associating a unique member of 'Y' with any ordered n-tuple as argument. Given the equation y = f(x1, . . ., xn), x1, . . ., xn are called the independent variables, or arguments, of the function, and 'y' the dependent variable or value. Functions may be many-one, meaning that different members of 'X' may take the same member of 'Y' as their value, or one-one, when to each member of 'X' there corresponds a distinct member of 'Y.' A function with domain 'X' and range within 'Y' is called a mapping from 'X' to 'Y,' written f: X → Y. If the function is such that (1) if x, y ∈ X and f(x) = f(y), then x = y, then the function is an injection from X to Y. If also (2) if y ∈ Y, then (∃x)(x ∈ X & y = f(x)), then the function is a bijection of 'X' onto 'Y.' A bijection is both an injection and a surjection, where a surjection is any function whose domain is 'X' and whose range is the whole of 'Y.' Since functions are relations, a function may also be defined as a set of ordered pairs ⟨x, y⟩ where 'x' is a member of 'X' and 'y' of 'Y.'
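The injection/surjection/bijection definitions are easy to check mechanically on small finite sets. A minimal sketch (representing a function as a Python dict over its domain; the names are ours):

```python
# Checking the textbook definitions on finite sets. A function f: X -> Y
# is represented as a dict whose keys form the domain X.

def is_injection(f, X, Y):
    # (1) if f(x) = f(y) then x = y: no two arguments share a value
    values = [f[x] for x in X]
    return len(values) == len(set(values))

def is_surjection(f, X, Y):
    # (2) every y in Y is f(x) for some x in X
    return set(f[x] for x in X) == set(Y)

def is_bijection(f, X, Y):
    # a bijection is both an injection and a surjection
    return is_injection(f, X, Y) and is_surjection(f, X, Y)

X = {1, 2, 3}
Y = {"a", "b", "c"}
f = {1: "a", 2: "b", 3: "c"}   # one-one and onto: a bijection
g = {1: "a", 2: "a", 3: "b"}   # many-one: 1 and 2 share the value "a"
```

Here f pairs each member of X with a distinct member of Y and exhausts Y, so it is a bijection; g is many-one, so it is neither an injection nor (here) a surjection.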

One of Frege's logical insights was that a concept is analogous to a function, and a predicate analogous to the expression for a function (a functor). Just as 'the square root of x' takes you from one number to another, so 'x is a philosopher' refers to a function that takes us from a person to a truth-value: true for values of 'x' who are philosophers, and false otherwise.
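Frege's analogy can be made concrete by writing the predicate as a literal function to truth-values. The membership set below is a stipulated toy example, not a claim about who the philosophers are:

```python
# Frege's analogy: the predicate 'x is a philosopher' refers to a function
# from objects to truth-values, just as 'the square root of x' refers to a
# function from numbers to numbers. The set here is purely illustrative.

PHILOSOPHERS = {"Frege", "Kant", "Quine"}

def is_a_philosopher(x):
    """Functor for the 'unsaturated' predicate '... is a philosopher'."""
    return x in PHILOSOPHERS

# "Saturating" the expression with an argument yields a truth-value.
```

The unsaturated expression `is_a_philosopher` on its own names no object; only when completed by an argument does it deliver True or False, which mirrors Frege's point about the unity of the sentence.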

Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though in some respects the latter is the position's weaker point, most of the criticism has been directed at the former, and much of that criticism has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus much anti-foundationalist artillery has been directed at the 'myth of the given': the idea that something is given to consciousness in a pre-conceptual, pre-judgmental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is that whatever putatively justifies a belief does so only if the subject is justified in supposing that the putative justifier has what it takes to do so. Hence, since the justification of the original belief depends on the justification of the higher-level belief just specified, the justification is not immediate after all. In reply, it can be said that we lack adequate support for any such higher-level requirement for justification, and that if it were imposed we would be launched on an infinite regress, for a similar requirement would hold equally for the higher-level belief that the original justifier was efficacious.

Functionalism suggests that an intelligent system, or mind, may fruitfully be thought of as the result of a number of sub-systems performing simpler tasks in co-ordination with each other. The sub-systems may be envisaged as homunculi, or small, relatively stupid agents. The archetype is a digital computer, where a battery of switches, each capable of only one response (on or off), can make up a machine that can play chess, write dictionaries, and so on.
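The homunculus picture can be illustrated at its smallest scale: two one-response ‘switches’ (Boolean gates), each stupid on its own, co-ordinate to perform a task neither can do alone. This is a toy sketch of the compositional point, not of a chess machine.

```python
# Each 'agent' below is a dumb two-state switch.

def AND(a: bool, b: bool) -> bool:
    return a and b

def XOR(a: bool, b: bool) -> bool:
    return a != b

def half_adder(a: bool, b: bool) -> tuple:
    """Two dumb gates in co-ordination compute a sum bit and a carry bit."""
    return XOR(a, b), AND(a, b)

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```

Stacking such adders yields arithmetic; stacking further layers yields, eventually, the chess-playing machine of the text.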

Nonetheless, when we confront the range of putatively self-conscious cognitive states, one might assume that there is a single ability they all presuppose. This is the ability to think about oneself: I can only have knowledge about myself if I have beliefs about myself, and I can only have beliefs about myself if I can entertain thoughts about myself. The same can be said for autobiographical memories and moral self-understanding. These are all ways of thinking about myself.

Of course, much of what I think when I think about myself in these self-conscious ways is also available to me to employ in my thoughts about other people and other objects. My knowledge that I am a human being deploys certain conceptual abilities that I can also deploy in thinking that you are a human being. The same holds when I congratulate myself for satisfying the exacting moral standards of autonomous moral agency: this involves concepts and descriptions that can apply equally to me and to others. On the other hand, when I think about myself, I am also putting to work an ability that I cannot put to work in thinking about other people and other objects. This is precisely the ability to apply those concepts and descriptions to myself. It has become common to refer to this ability as the ability to entertain ‘I’-thoughts.

What is an ‘I’-thought? Obviously, an ‘I’-thought is a thought that involves self-reference: I can think an ‘I’-thought only by thinking about myself. Equally obviously, though, this cannot be all there is to say on the subject, for I can think thoughts that involve self-reference but are not ‘I’-thoughts. Suppose I think that the next person to get a parking ticket in the centre of Toronto deserves everything he gets. Unbeknownst to me, the very next recipient of a parking ticket will be me. This makes my thought self-referring, but it does not make it an ‘I’-thought. Why not? The answer is simply that I do not know that I will be the next person to get a parking ticket in downtown Toronto. If ‘A’ is that unfortunate person, then there is a true identity statement of the form I = A; but since I do not know that this identity holds, I cannot be ascribed the thought that I will deserve everything I get. And so I am not thinking a genuine ‘I’-thought, because one cannot think a genuine ‘I’-thought while ignorant that one is thinking about oneself. So it is natural to conclude that ‘I’-thoughts involve a distinctive type of self-reference. This is the sort of self-reference whose natural linguistic expression is the first-person pronoun ‘I,’ because one cannot use the first-person pronoun without knowing that one is thinking about oneself.

This is still not quite right, however, because thought contents can be specified in two ways: directly or indirectly. The claim is that all the cognitive states under consideration presuppose the ability to think about oneself; this is not merely something they have in common, but what underlies them all. We can now see in more detail what the suggestion amounts to. What makes all these cognitive states modes of self-consciousness is the fact that they all have contents that can be specified either directly by means of the first-person pronoun ‘I’ or indirectly by means of the indirect reflexive pronoun ‘he’: they are first-person contents.

The class of first-person contents is not a homogeneous class. There is an important distinction to be drawn between two different types of first-person contents, corresponding to two different modes in which the first person can be employed. The existence of this distinction was first noted by Wittgenstein in an important passage from The Blue Book: there are two different cases in the use of the word ‘I’ (or ‘my’) which might be called ‘the use as object’ and ‘the use as subject.’ Examples of the first kind of use are these: ‘My arm is broken,’ ‘I have grown six inches,’ ‘I have a bump on my forehead,’ ‘The wind blows my hair about.’ Examples of the second kind are: ‘I see so-and-so,’ ‘I try to lift my arm,’ ‘I think it will rain,’ ‘I have a toothache’ (Wittgenstein 1958).

The explanations given of the distinction hinge on whether or not the judgements involved rest on an identification. One can point to the difference between the two categories by saying that the cases of the first category involve the recognition of a particular person, and that in these cases there is the possibility of an error: ‘The possibility of an error has been provided for . . . It is possible that, say in an accident, I should feel a pain in my arm, see a broken arm at my side, and think it is mine when really it is my neighbour’s. And I could, looking into a mirror, mistake a bump on his forehead for one on mine. On the other hand, there is no question of recognizing a person when I say I have a toothache. To ask “are you sure that it is you who have pains?” would be nonsensical’ (Wittgenstein 1958).

Wittgenstein is drawing a distinction between two types of first-person contents. The first type, which he describes as involving the use of ‘I’ as object, can be analysed in terms of more basic propositions. Suppose that the thought ‘I am B’ involves such a use of ‘I.’ Then we can understand it as a conjunction of the following two thoughts: ‘a is B’ and ‘I am a.’ We can term the former the predication component and the latter the identification component (Evans 1982). The reason for breaking the original thought down into these two components is precisely the possibility of error that Wittgenstein stresses in the second passage quoted: one can be quite correct in predicating that someone is B, yet mistaken in identifying oneself as that person.

To say that a statement ‘a is B’ is subject to error through misidentification relative to the term ‘a’ means that the following is possible: the speaker knows some particular thing to be ‘B,’ but makes the mistake of asserting ‘a is B’ because, and only because, he mistakenly thinks that the thing he knows to be ‘B’ is what ‘a’ refers to (Shoemaker 1968).

The point, then, is that in the second type of case one cannot be mistaken about who is being thought about. In one sense, however, Shoemaker’s criterion of immunity to error through misidentification relative to the first-person pronoun (hereafter simply ‘immunity to error through misidentification’) is too restrictive. Beliefs with first-person contents that are immune to error through misidentification tend to be acquired on grounds that usually result in knowledge, but they do not have to be. The definition of immunity to error through misidentification needs to be adjusted to accommodate them by formulating it in terms of justification rather than knowledge.

The connection to be captured is between the sources and grounds from which a belief is derived and the justification there is for that belief. Beliefs and judgements are immune to error through misidentification in virtue of the grounds on which they are based. The category of first-person contents being picked out is not defined by its subject matter or by any points of grammar. What demarcates the class of judgements and beliefs that are immune to error through misidentification is the evidence base from which they are derived, or the information on which they are based. So, to take an example, my thought that I have a toothache is immune to error through misidentification because it is based on my feeling a pain in my teeth. Similarly, the fact that I am consciously perceiving you makes my belief that I am seeing you immune to error through misidentification.

A suggestive definition, then: to say that a statement ‘a is b’ is subject to error through misidentification relative to the term ‘a’ means that the following is possible: the speaker is warranted in believing that some particular thing is ‘b,’ because his belief is based on an appropriate evidence base, but he makes the mistake of asserting ‘a is b’ because, and only because, he mistakenly thinks that the thing he justifiably believes to be ‘b’ is what ‘a’ refers to.

First-person contents that are immune to error through misidentification can be mistaken, but they do have a basic warrant in virtue of the evidence on which they are based, because the fact that they are derived from such an evidence base is closely linked to the fact that they are immune to error through misidentification. Of course, there is room for considerable debate about what types of evidence base are correlated with this class of first-person contents. It seems, then, that the distinction between different types of first-person content can be characterized in two different ways. We can distinguish between those first-person contents that are immune to error through misidentification and those that are subject to such error. Alternatively, we can distinguish between first-person contents with an identification component and those without such a component. For present purposes, these different formulations pick out the same classes of first-person contents, although in interestingly different ways.

All first-person contents subject to error through misidentification contain an identification component of the form ‘I am a,’ and that component itself employs the first-person pronoun. Of this embedded employment of the first-person pronoun we can ask: does it or does it not rest on a further identification component? Clearly, on pain of an infinite regress, at some stage we must arrive at an employment of the first-person pronoun that does not presuppose an identification component. The conclusion, then, is that any first-person content subject to error through misidentification will ultimately be anchored in a first-person content that is immune to error through misidentification.

It is also important to stress that this account of self-consciousness, like any theory that accords a serious role in self-consciousness to mastery of the semantics of the first-person pronoun, is motivated by an important principle that has governed much of the development of analytical philosophy. This is the principle that the philosophical analysis of thought can only proceed through the philosophical analysis of language. The principle has been defended most vigorously by Michael Dummett.

Thoughts differ from all else that is said to be among the contents of the mind in being wholly communicable: it is of the essence of thought that I can convey to you the very thought that I have, as opposed to being able to tell you merely something about what my thought is like. It is of the essence of thought not merely to be communicable, but to be communicable, without residue, by means of language. In order to understand thought, it is necessary, therefore, to understand the means by which thought is expressed (Dummett 1978).

Dummett goes on to draw the clear methodological implications of this view of the nature of thought: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing the use of language. It is these principles, which relate to what is open to view in the employment of language, not to what goes on in the mind other than via the medium of language, that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles, regulating our use of language, which we already implicitly grasp.

Many philosophers would want to dissent from the strong claim that the philosophical analysis of thought through the philosophical analysis of language is the fundamental task of philosophy. But there is a weaker principle that is very widely held, which may be called the Thought-Language Principle.

As it stands, the problem turns on two different roles that the pronoun ‘he’ can play in such oblique clauses. On the one hand, ‘he’ can be employed in a clause that the antecedent of the pronoun (i.e., the person named just before the clause in question) would have expressed using the first-person pronoun. In such a situation, ‘he’ is functioning as a quasi-indicator, and when it does so it is written ‘he*.’ Others have described this as the indirect reflexive pronoun. When ‘he’ is functioning as an ordinary indicator, by contrast, it picks out an individual in such a way that the person named just before the clause need not realize the identity of himself with that person. Clearly, then, the class of first-person contents is not a homogeneous class.

There is an obvious but central question that arises in considering the relation between the content of thought and the content of language, namely whether there can be thought without language. The conception of thought and language that underlies the Thought-Language Principle is clearly opposed to the proposal that there might be thought without language, but it is important to realize that neither the principle nor the considerations adverted to by Dummett directly yield the conclusion that there cannot be thought in the absence of language. According to the principle, the capacity for thinking particular thoughts can only be analysed through the capacity for linguistic expression of those thoughts. On the face of it, however, this does not yield the claim that the capacity for thinking particular thoughts cannot exist without the capacity for their linguistic expression.

That thoughts are wholly communicable does not entail that thoughts must always be communicated, which would be an absurd conclusion. Nor does it appear to entail that there must always be a possibility of communicating thoughts in any sense that would be incompatible with the ascription of thoughts to a non-linguistic creature. There is, after all, a primary distinction between thoughts being wholly communicable and it being actually possible to communicate any given thought. And without that further step there seems no way of getting from a thesis about the necessary communicability of thought to a thesis about the impossibility of thought without language.

A subject has self-awareness to the extent that he is able to distinguish himself from the environment and its contents. He has psychological self-awareness to the extent that he is able to distinguish himself as a psychological subject within a contrast space of other psychological subjects. What does this require? The notion of a non-conceptual point of view brings together the capacity to register one’s distinctness from the physical environment and various navigational capacities that manifest a degree of understanding of the spatial nature of the physical environment. One very basic reason for thinking that these two elements must be considered together emerges from a point made earlier: the richness of the self-awareness that accompanies the capacity to distinguish the self from the environment is directly proportional to the richness of the awareness of the environment from which the self is being distinguished. So no creature can understand its own distinctness from the physical environment without having an independent understanding of the nature of the physical environment; and since the physical environment is essentially spatial, this requires an understanding of the spatial nature of the physical environment. But this cannot be the whole story. It leaves unexplained why an understanding should be required of this particular essential feature of the physical environment. After all, it is also an essential feature of the physical environment that it is composed of objects that have both primary and secondary qualities, but there is no reflection of this in the notion of a non-conceptual point of view. More is needed to understand the significance of spatiality.

First, to take a step back from primitive self-consciousness, consider the account of self-identifying first-person thoughts given in Gareth Evans’s The Varieties of Reference (1982). Evans places considerable stress on the connection between the form of self-consciousness that he is considering and a grasp of the spatial nature of the world. As far as Evans is concerned, the capacity to think genuine first-person thoughts implicates a capacity for self-location, which he construes in terms of a thinker’s capacity to conceive of himself as identical with an element of the objective order. Though one need not endorse the particular gloss that Evans puts on this, the general idea is very powerful. The relevance of spatiality to self-consciousness comes about not merely because the world is spatial but also because the self-conscious subject is himself a spatial element of the world. One cannot be self-conscious without being aware that one is a spatial element of the world, and one cannot be aware that one is a spatial element of the world without a grasp of the spatial nature of the world. Evans tends to stress a dependence in the opposite direction between these notions:

The very idea of a perceivable, objective spatial world brings with it the idea of the subject as being in the world, with the course of his perceptions due to his changing position in the world and to the more or less stable way the world is. The idea that there is an objective world and the idea that the subject is somewhere cannot be separated, and where he is is given by what he can perceive (Evans 1982).

But the main burden of his work is very much that the dependence holds equally in the opposite direction.

It seems that this general idea can be extrapolated and brought to bear on the notion of a non-conceptual point of view. What binds together the two apparently discrete components of a non-conceptual point of view is precisely the fact that a creature’s self-awareness must be awareness of itself as a spatial being that acts upon and is acted upon by the spatial world. Evans’s own gloss on how a subject’s self-awareness is awareness of himself as a spatial being involves the subject’s mastery of a simple theory of perception, explaining how the world makes his perceptions as they are, with principles like ‘I perceive such and such; such and such holds at p; so (probably) I am at p’ and ‘I am at p; such and such does not hold at p; so I cannot really be perceiving such and such, even though it appears that I am’ (Evans 1982). This is not very satisfactory, though. If the claim is that the subject must explicitly hold these principles, then it is clearly false. If, on the other hand, the claim is that these are the principles of a theory that a self-conscious subject must tacitly know, then the claim seems very uninformative in the absence of a specification of the precise forms of behaviour that can only be explained by the ascription of such a body of tacit knowledge. We need an account of what it is for a subject to be correctly described as possessing such a simple theory of perception. The point, however, is simply that the notion of a non-conceptual point of view as presented can be viewed as capturing, at a more primitive level, precisely the same phenomenon that Evans is trying to capture with his notion of a simple theory of perception.
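Evans's two principles read almost like an inference procedure, and a toy rendering may make their structure clearer. The sketch below is wholly illustrative (the ‘map’ of places and the landmark names are invented): the first function applies ‘I perceive such and such; such and such holds at p; so (probably) I am at p,’ and the second applies the corrective principle about illusory perception.

```python
# A tiny 'map': what holds at each place (all names invented).
world = {
    "p1": {"fountain", "oak"},
    "p2": {"fountain", "statue"},
    "p3": {"statue"},
}

def candidate_locations(percepts: set) -> list:
    """First principle: the places where everything I perceive holds."""
    return [p for p, facts in world.items() if percepts <= facts]

def perception_veridical(location: str, percept: str) -> bool:
    """Second principle: at p, a percept that does not hold at p is illusory."""
    return percept in world[location]

print(candidate_locations({"fountain", "statue"}))  # ['p2']
print(perception_veridical("p3", "oak"))            # False: cannot really be seen
```

The toy makes vivid the worry in the text: a subject could ‘run’ this theory while remaining entirely passive, which is why the navigational, action-involving dimension matters.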

But it must not be forgotten that a vital role in this is played by the subject’s own actions and movements. Appreciating the spatiality of the environment and one’s place in it is largely a function of grasping one’s possibilities for action within the environment: realizing that if one wants to return to a particular place from here one must pass through these intermediate places, or that if there is something there that one wants, one should take this route to obtain it. That this is something Evans’s account could potentially overlook emerges when one reflects that a simple theory of perception of the form described could be possessed and deployed by a subject that only moves passively. The notion of a non-conceptual point of view, by contrast, incorporates the dimension of action by emphasizing the particularities of navigation.

Moreover, stressing the importance of action and movement indicates how the notion of a non-conceptual point of view might be grounded in the self-specifying information for action to be found in visual perception. I am thinking particularly of the concept of an affordance, so central to Gibsonian theories of perception. One important type of self-specifying information in the visual field is information about the possibilities for action and reaction that the environment affords the perceiver; in this sense, affordances are non-conceptual first-person contents. The development of a non-conceptual point of view clearly involves certain forms of reasoning, and clearly we will not have a full understanding of the notion of a non-conceptual point of view until we have an explanation of how this reasoning can take place and of the contents over which it takes place. The spatial reasoning involved in developing a non-conceptual point of view upon the world is largely a matter of calibrating different affordances into an integrated representation of the world.

In short, any learned cognitive ability must be constructible out of more primitive abilities already in existence. There are good reasons to think that the perception of affordances is innate. And so, if the perception of affordances is the key to the acquisition of an integrated spatial representation of the environment via the recognition of affordance symmetries, affordance transitivities, and affordance identities, then it is perfectly conceivable that the capacities implicated in an integrated representation of the world could emerge non-mysteriously from innate abilities.
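The idea of calibrating affordances via symmetries and transitivities admits a minimal formal sketch. Assuming, purely for illustration, that a route affordance is an ordered pair of places, integrating perceived routes into a unified spatial representation amounts to closing the set of routes under symmetry (A-B yields B-A) and transitivity (A-B and B-C yield A-C):

```python
def integrate(routes: set) -> set:
    """Close a set of (from, to) route affordances under symmetry and transitivity."""
    closed = set(routes)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            if (b, a) not in closed:                 # affordance symmetry
                closed.add((b, a)); changed = True
            for (c, d) in list(closed):
                if c == b and (a, d) not in closed:  # affordance transitivity
                    closed.add((a, d)); changed = True
    return closed

# Directly perceived routes (place names invented for illustration).
known = {("nest", "stream"), ("stream", "meadow")}
print(("meadow", "nest") in integrate(known))   # True: a route never yet taken
```

The point of the sketch is only that such integration is computationally cheap and requires no linguistic or conceptual resources, which is what the non-mysterious-emergence claim needs.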

Nonetheless, there are many philosophers who would be prepared to countenance the possibility of non-conceptual content without accepting that the theory of non-conceptual content can solve the paradox of self-consciousness. This is a more substantial task. The methodology adopted rests on the first of the marks of content, namely that content-bearing states serve to explain behaviour in situations where the connections between sensory input and behavioural output cannot be plotted in a law-like manner (the functionalist theory of self-reference). This is not to say that every instance of intentional behaviour where there are no such law-like connections between sensory input and behavioural output needs to be explained by attributing to the creature in question representational states with first-person contents. Even so, many such instances of intentional behaviour do need to be explained in this way, and this offers a way of establishing the legitimacy of non-conceptual first-person contents. What would satisfactorily demonstrate that legitimacy is the existence of forms of behaviour in pre-linguistic or non-linguistic creatures for which inference to the best explanation (which in this context includes inference to the most parsimonious explanation) demands the ascription of states with non-conceptual first-person contents.

Non-conceptual first-person contents and the pick-up of self-specifying information in the structure of exteroceptive perception provide very primitive forms of non-conceptual self-consciousness, forms that can plausibly be viewed as in place from birth or shortly afterward. The dimension along which forms of self-consciousness must be compared is the richness of the conception of the self that they provide. A crucial element in any form of self-consciousness is how it enables the self-conscious subject to distinguish between self and environment - what many developmental psychologists term self-world dualism. In this sense, self-consciousness is essentially a contrastive notion. One implication is that a proper understanding of the richness of a conception of the self requires that we take into account the richness of the conception of the environment with which it is associated. In the case of both somatic proprioception and the pick-up of self-specifying information in exteroceptive perception, there is a relatively impoverished conception of the environment. One prominent limitation is that both are synchronic rather than diachronic: the distinction between self and environment that they offer is effective at a time but not over time. The contrast between propriospecific and exterospecific invariants in visual perception, for example, provides a way for a creature to distinguish between itself and the world at any given moment, but this is not the same as a conception of oneself as an enduring thing, distinguishable over time from an environment that also endures over time.

One possible reaction to the paradox of self-consciousness is that it arises only because unrealistic and ultimately unwarranted requirements are being placed on what is to count as genuinely self-referring first-person thought. Support for such an objection will be found in those theories that attempt to explain first-person thoughts in a way that does not presuppose any form of internal representation of the self or any form of self-knowledge. The paradox arises because mastery of the semantics of the first-person pronoun is available only to creatures capable of thinking first-person thoughts whose contents involve reflexive self-reference and thus seem to presuppose mastery of the first-person pronoun. If, though, it can be established that the capacity to think genuinely first-person thoughts does not depend on any linguistic and conceptual abilities, then arguably the problem of circularity will no longer have purchase.

There is an account of self-reference and genuinely first-person thought that can be read in a way that poses just such a direct challenge to the account of self-reference underpinning the paradox. This is the functionalist account. On the functionalist view, reflexive self-reference is a completely non-mysterious phenomenon susceptible to a functional analysis, and it is not dependent upon any antecedent conceptual or linguistic skills. Nonetheless, the functionalist account of reflexive self-reference is deemed to be sufficiently rich to provide the foundation for an account of the semantics of the first-person pronoun. If this is right, then the circularity at the heart of the paradox of self-consciousness can be avoided.

The circularity problems at the root of the paradox arise because mastery of the semantics of the first-person pronoun requires the capacity to think first-person thoughts whose natural expression is by means of the first-person pronoun. It seems clear that the circle will be broken if there are forms of first-person thought more primitive than these - forms that do not require linguistic mastery of the first-person pronoun. What creates the problem of capacity circularity is the thought that we need to appeal to first-person contents in explaining mastery of the first-person pronoun, combined with the thought that any creature capable of entertaining first-person contents will have mastered the first-person pronoun. So if we want to retain the thought that mastery of the first-person pronoun can only be explained in terms of first-person contents, capacity circularity can only be avoided if there are first-person contents that do not presuppose mastery of the first-person pronoun.

On the other hand, however, it seems to follow from everything said earlier about ‘I’-thoughts that first-person thought in the absence of linguistic mastery of the first-person pronoun is a contradiction in terms. First-person thoughts have first-person contents, where first-person contents can only be specified in terms of either the first-person pronoun or the indirect reflexive pronoun. So how could such thoughts be entertained by a thinker incapable of reflexive self-reference? How can a thinker who has not mastered the first-person pronoun plausibly be ascribed thoughts with first-person contents? The thought that, despite all this, there are genuine first-person contents that do not presuppose mastery of the first-person pronoun is at the core of the functionalist theory of self-reference and first-person belief.

The best developed functionalist theory of self-reference has been provided by Hugh Mellor (1988-1989). The basic phenomenon he is interested in explaining is what it is for a creature to have what he terms a ‘subjective belief,’ that is to say, a belief whose content is naturally expressed by a sentence in the first-person singular and the present tense. The explanation of subjective belief that he offers makes such beliefs independent of both linguistic abilities and conscious beliefs. From this basic account he constructs an account of conscious subjective beliefs and of the reference of the first-person pronoun ‘I.’ These putatively more sophisticated cognitive states are causally derivable from basic subjective beliefs.

Mellor starts from the functionalist premise that beliefs are causal functions from desires to actions. It is, of course, the emphasis on causal links between belief and action that makes it plausible to think that belief might be independent of language and conscious belief: ‘agency entails neither linguistic ability nor conscious belief’ (Mellor 1988). The idea that beliefs are causal functions from desires to actions can be deployed to explain the content of a given belief via the equation of truth conditions and utility conditions, where utility conditions are those in which the actions caused by the conjunction of that belief with a single desire result in the satisfaction of that desire. We can see how this works by considering Mellor’s own example. Consider a creature ‘x’ who is hungry and has a desire for food at time ‘t’. That creature has a token belief b(p) that conjoins with its desire for food to cause it to eat what is in front of it: the belief that there is food in front of ‘x’ at that time. Moreover, for b(p) to cause ‘x’ to eat what is in front of it at ‘t’, b(p) must be a belief that ‘x’ has at ‘t’. For Mellor, therefore, the utility/truth condition of b(p) is that whatever creature has this belief is actually facing food. And a belief with this content is, of course, the subjective belief whose natural linguistic expression would be ‘I am facing food now.’ On the other hand, however, a belief that would naturally be expressed with these words can be ascribed to a non-linguistic creature, because what makes it the belief that it is depends not on whether it can be linguistically expressed but on how it affects behaviour.

What secures self-reference in belief b(p) is the contiguity of cause and effect. The essence of a subjective belief is that it causes action conjointly with a desire or set of desires, and the relevant sort of conjunction is possible only if it is the same agent at the same time who has the desire and the belief.

For in order to believe ‘p’, I need only be disposed to eat what I face if I feel hungry, a disposition which causal contiguity ensures that only my simultaneous hunger can provoke, and only into making me eat, and only then.

Scientific knowledge is an extension of ordinary language into greater levels of abstraction and precision through reliance upon geometric and numerical relationships. We speculate that the seeds of the scientific imagination were planted in ancient Greece, as opposed to Chinese or Babylonian culture, partly because the social, political, and economic climate in Greece was more open to the pursuit of knowledge with marginal cultural utility. Another important factor was that the special character of Homeric religion allowed the Greeks to invent a conceptual framework that would prove useful in future scientific investigation. But it was only when this inheritance from Greek philosophy was wedded to some essential features of beliefs about the origin of the cosmos that the paradigm for classical physics emerged.

All the same, newer logical frameworks point to the logical conditions for the description and comprehension of experience in areas such as quantum physics. Although normally referred to as the principle of complementarity, the use of the word ‘principle’ is unfortunate, in that complementarity is not a principle as that word is used in physics. Complementarity is rather a logical framework for the acquisition and comprehension of scientific knowledge, one that discloses a new relationship between physical theory and physical reality and undermines all appeals to metaphysics.

Under the logical conditions for description in quantum mechanics, the two conceptual components of classical causality, space-time description and energy-momentum conservation, are mutually exclusive and can be coordinated only through the limitations imposed by Heisenberg’s indeterminacy principle.
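The limitations referred to here are not spelled out in the text, but they are the standard Heisenberg indeterminacy relations, which may be stated (in their familiar textbook form) as:

```latex
% Heisenberg indeterminacy relations: the trade-off between the
% space-time description (position x, time t) and the conserved
% dynamical quantities (momentum p, energy E).
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta E \, \Delta t \;\ge\; \frac{\hbar}{2}.
```

On this reading, the sharper the space-time description (small \(\Delta x\), \(\Delta t\)), the less sharply the conserved quantities of momentum and energy can be specified, which is why the two classical components of causality cannot be applied simultaneously without limit.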

The logical framework of complementarity is useful and necessary when the following requirements are met: (1) when the theory consists of two individually complete constructs; (2) when the constructs preclude one another in a description of the unique physical situation to which they both apply; and (3) when both together constitute a complete description of that situation. When we discover a situation in which complementarity clearly applies, we necessarily confront an imposing limit to our knowledge of that situation. Knowledge can never be complete in the classical sense, because we are unable simultaneously to apply the mutually exclusive constructs that make up the complete description.

Why, then, must we use classical descriptive categories, like space-time description and energy-momentum conservation, in our descriptions of quantum events? If classical mechanics is only an approximation to the actual physical situation, it would seem to follow that classical descriptive categories are not adequate to describe this situation. If, for example, quantities like position and momentum are abstractions with properties that are ‘definable and observable only through their interactions with other systems,’ why should we represent these classical categories as if they were actual quantities in physical theory and experiment? This question is rarely discussed, but it carries some formidable implications for the future of scientific thought.

Heidegger's theory of spatiality distinguishes three different types of space: (1) world-space, (2) regions (Gegend), and (3) Dasein's spatiality. What Heidegger calls ‘world-space’ is space conceived as an ‘arena’ or ‘container’ for objects. It captures both our ordinary conception of space and theoretical space - in particular absolute space. Chairs, desks, and buildings exist ‘in’ space, but world-space is independent of such objects, much like absolute space ‘in which’ things exist. However, Heidegger thinks that such a conception of space is an abstraction from the spatializing conduct of our everyday activities. The things that we deal with are near or far relative to us; according to Heidegger, this nearness or farness of things is how we first become familiar with that which we (later) represent to ourselves as ‘space.’ This familiarity is what renders the understanding of space (in a ‘container’ metaphor or in any other way) possible. It is because we act spatially, going to places and reaching for things to use, that we can even develop a conception of abstract space at all. What we normally think of as space - world-space - turns out not to be what space fundamentally is; world-space is, in Heidegger's terminology, space conceived as vorhanden. It is an objectified space founded on a more basic space-of-action.

Since Heidegger thinks that space-of-action is the condition for world-space, he must explain the former without appealing to the latter. Heidegger's task then is to describe the space-of-action without presupposing such world-space and the derived concept of a system of spatial coordinates. However, this is difficult because all our usual linguistic expressions for describing spatial relations presuppose world-space. For example, how can one talk about the ‘distance between you and me’ without presupposing some sort of metric, i.e., without presupposing an objective access to the relation? Our spatial notions such as ‘distance,’ ‘location,’ etc. must now be redescribed from a standpoint within the spatial relation of self (Dasein) to the things dealt with. This problem is what motivates Heidegger to invent his own terminology and makes his discussion of space awkward. In what follows I will try to use ordinary language whenever possible to explain his principal ideas.

The space-of-action has two aspects: regions (space as Zuhandenheit) and Dasein's spatiality (space as Existentiale). The sort of space we deal with in our daily activity is ‘functional’ or zuhanden, and Heidegger's term for it is ‘region.’ The places we work and live - the office, the park, the kitchen, etc. - all have different regions that organize our activities and contextualize ‘equipment.’ My desk area as my work region has a computer, printer, telephone, books, etc., in their appropriate ‘places,’ according to the spatiality of the way in which I work. Regions differ from space viewed as a ‘container’; the latter notion lacks a ‘referential’ organization with respect to our context of activities. Heidegger wants to claim that referential functionality is an inherent feature of space itself, and not just a ‘human’ characteristic added to a container-like space.

In our activity, how do we specifically stand with respect to functional space? We are not ‘in’ space as things are, but we do exist in some spatially salient manner. What Heidegger is trying to capture is the difference between the nominal expression ‘we exist in space’ and the adverbial expression ‘we exist spatially.’ He wants to describe spatiality as a mode of our existence rather than conceiving space as an independent entity. Heidegger identifies two features of Dasein's spatiality - ‘de-severance’ (Ent-fernung) and ‘directionality’ (Ausrichtung).

De-severance describes the way we exist as a process of spatial self-determination by ‘making things available’ to ourselves. In Heidegger's language, in making things available we ‘take in space’ by ‘making the farness vanish’ and by ‘bringing things close.’

We are not simply contemplative beings, but we exist through concretely acting in the world - by reaching for things and going to places. When I walk from my desk area into the kitchen, I am not simply changing locations from point A to B in an arena-like space, but I am ‘taking in space’ as I move, continuously making the ‘farness’ of the kitchen ‘vanish,’ as the shifting spatial perspectives are opened as I go along.

This process is also inherently ‘directional.’ Every de-severing is aimed toward something or in a certain direction that is determined by our concern and by specific regions. I must always face and move in a certain direction that is dictated by a specific region. If I want to get a glass of ice tea, instead of going out into the yard, I face toward the kitchen and move in that direction, following the region of the hallway and the kitchen. Regions determine where things belong, and our actions are coordinated in directional ways accordingly.

De-severance, directionality, and regionality are three ways of describing the spatiality of a unified Being-in-the-world. As aspects of Being-in-the-world, these spatial modes of being are equiprimordial. Regions ‘refer’ to our activities, since they are established by our ways of being and our activities. Our activities, in turn, are defined in terms of regions. Only through the region can our de-severance and directionality be established. Our object of concern always appears in a certain context and place, in a certain direction. It is because things appear in a certain direction and in their places ‘there’ that we have our ‘here.’ We orient ourselves and organize our activities, always within regions that must already be given to us.

Heidegger's analysis of space does not refer to temporal aspects of Being-in-the-world, even though they are presupposed. In the second half of Being and Time he explicitly turns to the analysis of time and temporality in a discussion that is significantly more complex than the earlier account of spatiality. Heidegger makes the following five distinctions between types of time and temporality: (1) the ordinary or ‘vulgar’ conception of time; this is time conceived as Vorhandenheit. (2) world-time; this is time as Zuhandenheit. Dasein's temporality is divided into three types: (3) Dasein's inauthentic (uneigentlich) temporality, (4) Dasein's authentic (eigentlich) temporality, and (5) temporal originality or ‘temporality as such.’ The analyses of the vorhanden and zuhanden modes of time are interesting, but it is Dasein's temporality that is relevant to our discussion, since it is this form of time that is said to be founding for space. Unfortunately, Heidegger is not clear about which temporality plays this founding role.

We can begin by excluding Dasein's inauthentic temporality. This mode of time refers to our unengaged, ‘average’ way in which we regard time. It is the ‘past we forget’ and the ‘future we expect,’ all without decisiveness and resolute understanding. Heidegger seems to consider that this mode of temporality is the temporal dimension of de-severance and directionality, since de-severance and directionality deal only with everyday actions. As such, inauthentic temporality must itself be founded in an authentic basis of some sort. The two remaining candidates for the foundation are Dasein's authentic temporality and temporal originality.

Dasein's authentic temporality is the ‘resolute’ mode of temporal existence. Authentic temporality is realized when Dasein becomes aware of its own finite existence. This temporality has to do with one's grasp of his or her own life as a whole from one's own unique perspective. Life gains meaning as one's own life-project, bounded by the sense of one's realization that he or she is not immortal. This mode of time appears to have a normative function within Heidegger's theory. In the second half of BT he often refers to inauthentic or ‘everyday’ mode of time as lacking some primordial quality which authentic temporality possesses.

In contrast, temporal originality is the formal structure of Dasein's temporality itself. In addition to its spatial Being-in-the-world, Dasein also exists essentially as ‘projection.’ Projection is oriented toward the future, and this futural orientation regulates our concern by constantly realizing various possibilities. Temporality is characterized formally as this dynamic structure of ‘a future that makes present in the process of having been.’ Heidegger calls the three moments of temporality - the future, the present, and the past - the three ecstases of temporality. This mode of time is not normative but rather formal or neutral; as Blattner argues, the temporal features that constitute Dasein's temporality describe both inauthentic and authentic temporality.

There are some passages that indicate that authentic temporality is the primary manifestation of temporality, because of its essential orientation toward the future. For instance, Heidegger states that ‘temporality first showed itself in anticipatory resoluteness.’ Elsewhere, he argues that ‘the “time” which is accessible to Dasein's common sense is not primordial, but arises rather from authentic temporality.’ In these formulations, authentic temporality is said to found the other, inauthentic modes. According to Blattner, this is ‘by far the most common’ interpretation of the status of authentic time.

However, to agree with Blattner and Haar, there are far more passages where Heidegger treats temporal originality, that is, temporality as such, as distinct from authentic temporality, and as founding both it and Being-in-the-world as well. Here are some examples: ‘Temporality has different possibilities and different ways of temporalizing itself. The basic possibilities of existence, the authenticity and inauthenticity of Dasein, are grounded ontologically on possible temporalizations of temporality.’ ‘Time is primordial as the temporalizing of temporality, and as such it makes possible the Constitution of the structure of care.’

Heidegger's conception seems to be that it is because we are fundamentally temporal - having the formal structure of ecstatico-horizonal unity - that we can project, authentically or inauthentically, our concernful dealings in the world and exist as Being-in-the-world. It is on this account that temporality is said to found spatiality.

Since Heidegger uses the term ‘temporality’ rather than ‘authentic temporality’ whenever the founding relation is discussed between space and time, I will begin the following analysis by assuming that it is originary temporality that founds Dasein's spatiality. On this assumption two interpretations of the argument are possible, but both are unsuccessful given his phenomenological framework.

I will then consider the possibility that it is ‘authentic temporality’ which founds spatiality. Two interpretations are also possible in this case, but neither will establish a founding relation successfully. I will conclude that despite Heidegger's claim, an equiprimordial relation between time and space is most consistent with his own theoretical framework. I will now evaluate the specific arguments in which Heidegger tries to prove that temporality founds spatiality.

The principal argument appears in the section entitled ‘The Temporality of the Spatiality that is Characteristic of Dasein.’ Heidegger begins the section with the following remark: Though the expression ‘temporality’ does not signify what one understands by ‘time’ when one talks about ‘space and time,’ nevertheless spatiality seems to make up another basic attribute of Dasein corresponding to temporality. Thus with Dasein's spatiality, existential-temporal analysis seems to come to a limit, so that this entity that we call ‘Dasein’ must be considered as ‘temporal’ ‘and’ as spatial coordinately.

Accordingly, Heidegger asks, ‘Has our existential-temporal analysis of Dasein thus been brought to a halt . . . by the spatiality that is characteristic of Dasein . . . and Being-in-the-world?’ His answer is no. He argues that since ‘Dasein's constitution and its ways to be are possible ontologically only on the basis of temporality,’ and since the ‘spatiality that is characteristic of Dasein . . . belongs to Being-in-the-world,’ it follows that ‘Dasein's specific spatiality must be grounded in temporality.’

Heidegger's claim is that the totality of regions-de-severance-directionality can be organized and re-organized, ‘because Dasein as temporality is ecstatico-horizonal in its Being.’ Because Dasein exists futurally as ‘for-the-sake-of-which,’ it can discover regions. Thus, Heidegger remarks: ‘Only on the basis of its ecstatico-horizonal temporality is it possible for Dasein to break into space.’

However, in order to establish that temporality founds spatiality, Heidegger would have to show that spatiality and temporality must be distinguished in such a way that temporality not only shares a content with spatiality but also has additional content as well. In other words, they must be truly distinct and not just analytically distinguishable. But what is the content of ‘the ecstatic-horizonal constitution of temporality?’ Does it have a content above and beyond Being-in-the-world? Nicholson poses the same question as follows: Is it human care that accounts for the characteristic features of human temporality? Or is it, as Heidegger says, human temporality that accounts for the characteristic features of human care, serves as their foundation? The first alternative, according to Nicholson, is to reduce temporality to care: ‘the specific attributes of the temporality of Dasein . . . would be in their roots not aspects of temporality but reflections of Dasein's care.’ The second alternative is to treat temporality as having some content above and beyond care: ‘the three-fold constitution of care stems from the three-fold constitution of temporality.’

Nicholson argues that the second alternative is the correct reading. Dasein lives in the world by making choices, but ‘the ekstasis of temporality lies well prior to any choice . . . so our study of care introduces us to a matter whose scope outreaches care: the ekstases of temporality itself.’ Accordingly, what Nicholson takes this to make clear is that ‘the reign of temporal ekstasis over the choices we make accords with the place we occupy as finite beings in the world.’

But if Nicholson's interpretation is right, what would be the content of ‘the ekstases of temporality itself,’ if not some sort of purely formal entity or condition such as Kant's ‘pure intuition?’ But this would imply that Heidegger has left phenomenology behind and is now engaging in establishing a transcendental framework outside the analysis of Being-in-the-world, such that this formal structure founds Being-in-the-world. This is inconsistent with his initial claim that Being-in-the-world is itself foundational.

I believe Nicholson's first alternative offers a more consistent reading. The structure of temporality should be treated as an abstraction from Dasein's Being-in-the-world, specifically from care. In this case, the content of temporality is just the past, present, and future ways of Being-in-the-world. Heidegger's own words support this reading: ‘as Dasein temporalizes itself, a world is too,’ and ‘the world is neither present-at-hand nor ready-to-hand, but temporalizes itself in temporality.’ He also states that the zuhanden ‘world-time, in the rigorous sense of the existential-temporal conception of the world, belongs to temporality itself.’ In this reading, ‘temporality temporalizing itself,’ ‘Dasein's projection,’ and ‘the temporal projection of the world’ are three different ways of describing the same ‘happening’ of Being-in-the-world, which Heidegger calls ‘self-directive.’

However, if this is the case, then temporality does not found spatiality, except perhaps in the trivial sense that spatiality is built into the notion of care that is identified with temporality. The content of ‘temporality temporalizing itself’ simply is the various openings of regions, i.e., Dasein's ‘breaking into space.’ Certainly, as Stroeker points out, it is true that ‘nearness and remoteness are spatio-temporal phenomena and cannot be conceived without a temporal moment.’ But this necessity does not constitute a foundation. Rather, they are equiprimordial. The addition of temporal dimensions does indeed complete the discussion of spatiality, which abstracted from time. But this completion, while it better articulates the whole of Being-in-the-world, does not show that temporality is more fundamental.

If temporality and spatiality are equiprimordial, then all of the supposedly founding relations between temporality and spatiality could just as well be reversed and still hold true. Heidegger's view is that ‘because Dasein as temporality is ecstatico-horizonal in its Being, it can take along with it a space for which it has made room, and it can do so factically and constantly.’ But if Dasein is essentially a factical projection, then the reverse should also be true. Heidegger appears to have assumed the priority of temporality over spatiality perhaps under the influence of Kant, Husserl, or Dilthey, and then based his analyses on that assumption.

However, there may still be a way to save Heidegger's foundational project in terms of authentic temporality. Although Heidegger never specifically identifies authentic temporality as the founding mode, since he suggests earlier that the primary manifestation of temporality is authentic temporality, such a reading may perhaps be justified. This reading would treat authentic temporality as founding the whole spatio-temporal structure of Being-in-the-world. The resoluteness of authentic temporality, arising out of Dasein's own ‘Being-towards-death,’ would supply a content to temporality above and beyond everyday involvements.

On this reading, Dasein's spatiality has its foundation in resoluteness: Dasein determines its own Situation through anticipatory resoluteness, which includes particular locations and involvements, i.e., the spatiality of Being-in-the-world. The same set of circumstances could be transformed into a new situation with different significance, if Dasein chooses resolutely to bring that about. Authentic temporality in this case can be said to found spatiality, since Dasein's spatiality is determined by resoluteness. This reading moreover enables Heidegger to construct a hierarchical relation between temporality and spatiality within Being-in-the-world rather than going outside of it to a formal transcendental principle, since the choice of spatiality is grasped phenomenologically in terms of the concrete experience of decision.

Moreover, one might argue that according to Heidegger one's own grasp of ‘death’ is uniquely a temporal mode of existence, whereas there is no such weighty conception involving spatiality. Death is what makes Dasein ‘stand before itself in its own most potentiality-for-Being.’ Authentic Being-towards-death is a ‘Being toward a possibility - indeed, toward a distinctive possibility of Dasein itself.’ One could argue that notions such as ‘potentiality’ and ‘possibility’ are distinctively temporal, nonspatial notions. So ‘Being-towards-death,’ as temporal, appears to be much more ontologically ‘fundamental’ than spatiality.

However, Heidegger is not yet out of the woods. I believe that labelling the notions of anticipatory resoluteness, Being-towards-death, potentiality, and possibility specifically as temporal modes of being (to the exclusion of spatiality) begs the question. Given Heidegger's phenomenological framework, why assume that these notions are only temporal (without spatial dimensions)? If Being-towards-death, potentiality-for-Being, and possibility were ‘purely’ temporal notions, what phenomenological sense can we make of such abstract conceptions, given that these are manifestly our modes of existence as bodily beings? Heidegger cannot have in mind such an abstract notion of time, if he wants to treat authentic temporality as the meaning of care. It would seem more consistent with his theoretical framework to say that Being-towards-death is a rich spatio-temporal mode of being, given that Dasein is Being-in-the-world.

Furthermore, the interpretation that defines resoluteness as uniquely temporal suggests too much of a voluntaristic or subjectivistic notion of the self, one that controls its own Being-in-the-world with a view to its future. This would drive a wedge between the self and its Being-in-the-world, thereby creating a temporal ‘inner self’ which can decide its own spatiality. However, if Dasein is Being-in-the-world as Heidegger claims, then all of Dasein's decisions should be viewed as concretely grounded in Being-in-the-world. If so, spatiality must be an essential constitutive element.

Hence, authentic temporality, if construed narrowly as the founding mode of temporality, at first appears to be able to found spatiality, but it also commits Heidegger either to an account of time that is too abstract, or to a notion of the self far more like Sartre's than his own. What is lacking in Heidegger's theory, and what generates this sort of difficulty, is a developed conception of Dasein as a lived body - a notion more fully developed by Merleau-Ponty.

The elements of a more consistent interpretation of authentic temporality are present in Being and Time. This interpretation incorporates a view of ‘authentic spatiality’ in the notion of authentic temporality. This would be Dasein's resolutely grasping its own spatio-temporal finitude with respect to its place and its world. Dasein is born in a particular place, lives in a particular place, and dies in a particular place, all of which it can relate to in an authentic way. The place Dasein lives is not a place of anonymous involvements. The place of Dasein must be there where its own potentiality-for-Being is realized. Dasein's place is thus a determination of its existence. Had Heidegger developed such a conception more fully, he would have seen that temporality is equiprimordial with thoroughly spatial and contextual Being-in-the-world. They are distinguishable but equally fundamental ways of emphasizing our finitude.

The internal tensions within his theory eventually lead Heidegger to reconsider his own positions. In his later period, he explicitly develops what may be viewed as a conception of authentic spatiality. For instance, in ‘Building Dwelling Thinking,’ Heidegger states that Dasein's relations to locations and to spaces inhere in dwelling, and dwelling is the basic character of our Being. The notion of dwelling expresses an affirmation of spatial finitude. Through this affirmation one acquires a proper relation to one's environment.

But the idea of dwelling is in fact already discussed in Being and Time. Regarding the term ‘Being-in-the-world,’ Heidegger explains that the word ‘in’ is derived from ‘innan’: to ‘reside,’ ‘habitare,’ ‘to dwell.’ The emphasis on ‘dwelling’ highlights the essentially ‘worldly’ character of the self.

Thus from the beginning Heidegger had a conception of spatial finitude, but this fundamental insight remained undeveloped because of his ambition to carry out the foundational project that favoured time. From the 1930s on, as Heidegger abandons the foundational project focussing on temporality, the conception of authentic spatiality comes to the fore. For example, in Discourse on Thinking Heidegger considers the spatial character of Being as ‘that-which-regions (die Gegnet).’ This peculiar expression is a re-conceptualization of the notion of ‘region’ as it appeared in Being and Time. Region is given an active character and defined as the ‘openness that surrounds us’ which ‘comes to meet us.’ By giving it an active character, Heidegger wants to emphasize that region is not brought into being by us, but rather exists in its own right, as that which expresses our spatial existence. Heidegger states that ‘one needs to understand “resolve” (Entschlossenheit) as it is understood in Being and Time: as the opening of man [Dasein] particularly undertaken by him for openness, . . . which we think of as that-which-regions.’ Here Heidegger is asserting an authentic conception of spatiality. The finitude expressed in the notion of Being-in-the-world is thus transformed into an authentic recognition of our finite worldly existence in later writings.

The return to the conception of spatial finitude in the later period shows that Heidegger never abandoned the original insight behind his conception of Being-in-the-world. But once committed to this idea, it is hard to justify singling out an aspect of the self - temporality - as the foundation for the rest of the structure. All of the existentiales and zuhanden modes, which constitute the whole of Being-in-the-world, are equiprimordial, each mode articulating different aspects of a unified whole. The preference for temporality as the privileged meaning of existence reflects the Kantian residue in Heidegger's early doctrine that he later rejected as still excessively subjectivistic.

It seems natural, nonetheless, to combine this close connection with these conclusions by proposing a deflationary account of self-consciousness: self-consciousness as the capacity to think ‘I’-thoughts that are immune to error through misidentification, where such immunity varies with the semantics of the ‘self.’ Once we have an account of what it is to be capable of thinking ‘I’-thoughts, we will have explained everything distinctive about self-consciousness. This stems from the thought that what is distinctive about ‘I’-thoughts is that they are either themselves immune to error through misidentification or rest on further ‘I’-thoughts that are immune in that way.

Once we have an account of what it is to be capable of thinking thoughts that are immune to error through misidentification, we will have explained everything about the capacity to think ‘I’-thoughts, just as one might claim that this immunity itself derives from the semantics of the ‘self.’

Once again, when we have an account of that semantics, we will have explained everything distinctive about the capacity to think thoughts that are immune to error through misidentification.

The suggestion is that the semantics of ‘self-ness’ will explain what is distinctive about the capacity to think thoughts immune to error through misidentification. Semantics alone cannot be expected to explain the capacity for thinking thoughts. The point, rather, is that all there is to the capacity to think thoughts that are immune to error through misidentification is the capacity to think the sort of thought whose natural linguistic expression involves the ‘self,’ where this capacity is given by mastery of the semantics of ‘self-ness.’ To explain what it is to master the semantics of ‘self-ness’ is thus, in particular, to explain what it is to think thoughts immune to error through misidentification.

On this view, mastery of the semantics of ‘self-ness’ may be construed as the single most important element in a theory of self-consciousness.

A quick objection might be put to a defender of the redundancy, or deflationary, theory: how can mastery of the semantics of ‘self-ness’ make sense of the distinction between ‘self’ contents that are immune to error through misidentification and those that lack such immunity? This is only an apparent difficulty, however, once one remembers that those ‘self’ contents that are not immune to error through misidentification, because they employ ‘I’ as object, can be broken down into their component elements: an identification component and a predication component. It is the composite identification components of such judgements that mastery of the semantics of ‘self’ content must be called upon to explain, and identification components are, of course, immune to error through misidentification.

It is also important to stress that the redundancy and deflationary theories of self-consciousness, and indeed any theory that accords a serious role in self-consciousness to mastery of the semantics of ‘self-ness,’ are motivated by an important principle that has governed much of the development of analytical philosophy. The principle is that the analysis of thought can only proceed through the philosophical analysis of language: we communicate thoughts by means of language because we have an implicit understanding of the workings of language, that is, of the principles governing its use. It is these principles, which relate to what is open to view (we have no access to thought other than via the medium of language), that endow our sentences with the senses that they carry. In order to analyse thought, therefore, it is necessary to make explicit those principles regulating our use of language which we already implicitly grasp.

Still, at the core of the notion of broad self-consciousness is the recognition of what developmental psychologists call ‘self-world dualism.’ Any subject properly described as self-conscious must be able to register the distinction between himself and the world. Of course, this is a distinction that can be registered in a variety of ways. The capacity for self-ascription of thoughts and experiences, in combination with the capacity to understand the world as a spatially and causally structured system of mind-independent objects, is a high-level way of registering this distinction.

Consciousness of objects is closely related to sentience and to being awake. It is (at least) a distinctive informational and behavioural state, one of being responsive to conditions in one's immediate environmental surroundings. It is the ability, for example, to process and act responsively to information about food, friends, foes, and other items of relevance. One finds consciousness of objects in creatures much less complex than human beings. It is what we (at any rate first and primarily) have in mind when we say of some person or animal coming out of general anaesthesia, ‘It is regaining consciousness.’ Consciousness of objects is not just any form of informational access to the world, but a matter of knowing about, and being conscious of, things in the world.
