COMPUTING MACHINERY AND REALITY

By P. A. Beckman

© 2008

San Francisco State University

1600 Holloway Avenue

San Francisco, CA, USA   94132

pbeckman@sfsu.edu

415-338-6240

 

COMPUTING MACHINERY AND REALITY

1.         The Reality Imitation Game

            I propose to consider the question, “Can machines create reality?”

            Over 50 years ago, Alan Turing asked a very similar question: “Can machines think?”  This essay will apply Turing’s format and logic in an attempt to erect a framework for thinking about computing machines, virtual environments, and a method for testing the “realness” of virtual environments.

            Before considering the question directly, it is appropriate, as Turing did, to re-phrase it into a format more appropriate for the application of logic.  As Turing wished to avoid the difficulties of defining the words “machine” and “think”, I wish to avoid the difficulty of defining the word “reality”.  I wish also to avoid precisely defining the word “machine”, but will digress somewhat here to indicate the reasoning behind this omission.

            Since Turing’s time, great advances have been made in the construction of what Turing repeatedly referred to as “discrete-state machines”.  He went to great lengths to describe such a machine, which is the equivalent of today’s digital computer.  We work with the advantage of 50 additional years of research and education in the topic of digital computers.  I shall therefore omit the description of what comprises such a machine and how it works, assuming that a modern digital computer takes the place of Turing’s discrete-state machine and that interested readers are already familiar with such devices.

            Whereas the crux of Turing’s discussion focused on the concept of “intelligence”, this essay will focus on the concept of “reality”.  Under the same assumption that interested readers already know the details of the composition and internal processing of digital computers, I shall also not provide a precise definition of what constitutes a “reality”.  With the large number of “realities” available to the common computer user of today for interaction, I will leave it to the reader to supply the exact parameters of such a “reality”.  To give the reader a starting point in this direction, one may think of the World Wide Web as just such a “reality”.  For those whose thinking is based less in the logical and more in the perceptual, one may also consider advanced virtual reality computer games as examples of such “realities”.

            To circumvent the question “Can machines create reality?” I shall, as Turing did, re-phrase it into one more amenable to logical debate and inquiry.  Turing created what he called the “imitation game” (hereinafter referred to as the “intelligence imitation game”, or IIG), in which one might probe the level of “intelligence” of a machine.  The game was played in the following manner.  Two players, a man (A), and a woman (B), communicate independently with a third player, an interrogator (C), through the passing of messages.  Turing specified that the interrogator could be either a man or a woman; the choice does not affect the outcome of the game.  Neither A nor B can observe C, and vice versa.  All messages are presumed to be passed by text and not voice.  It is the goal of the interrogator to determine the gender of both A and B by the information obtained via the messages.  It is the goal of A (the man) to thwart the objective of C (the interrogator) through some subterfuge in the information he passes to C.  It is the goal of B (the woman) to aid C in accomplishing C’s objective.

            The game thus described is of minor interest, perhaps as a parlor game.  However, Turing moved closer to answering his original question by proposing that the part of A be played by a machine (meaning a discrete-state machine or a computer), and the part of B (the woman) be played by A (the man).  Now, both the man and the machine must “simulate” a woman.  His conjecture was that if a machine could successfully compete in this form of the imitation game, then such a machine could be considered to be intelligent, because it was as good as another human at simulating a type of human (a woman) that neither the machine nor the man actually was.

            Turing stipulated a few specific ground rules for his game.  He expected that the interrogator could not endlessly ask questions of A and B; his proposed time limit for questioning was 5 minutes.  Also, all physical externalities of A and B were to be hidden, so that C could only use information from the passed messages on which to base his/her decision.  In the 50 or so years since Turing’s paper, researchers have proposed extensions to Turing’s test that explicitly define this consideration of physical externalities (Saygin et al., 2000).

            Also, although he does not say so explicitly, I believe that Turing would prefer that the interrogator only use his/her own powers of observation.  This means that the interrogator would not be allowed the use of a tool or machine specifically developed or brought into the game for the intent of distinguishing A from B.  I shall add such a stipulation to the general ground rules of the IIG.

            With this basis, I propose a Reality Imitation Game (RIG).  While the precise details of the game differ slightly from the IIG, the overall structure is the same.  Before describing this new game in more detail, I must digress to introduce two more elements to the game.  The first element is a player in the game, and I shall name it “God’s reality”.  God’s reality is the real world (and universe, to be more general) in which we all function.  It includes physical entities, known physical laws that govern those entities, and even all unknown entities and physical laws.  The second element is also a player in the game, and I shall refer to it using the term MBVE, for “machine-based virtual environment”.  The MBVE is a “reality” similar to those mentioned above, generated and maintained by a discrete-state machine.  It includes whatever entities and rules are granted to it by its human creator.

            This new RIG is then played by God’s reality (A), a machine-based virtual environment (B), and an interrogator (C).  Both A and B pass information independently to and from C (most likely not simultaneously), not via textual messages, but by C’s interaction with each of A and B.  It is the goal of C to determine which of the players A or B is God’s reality and which is the MBVE.  It is assumed that neither A nor B has a consciousness (a topic to be explored later in this essay), and therefore neither has a goal in the RIG.  Also, as a result of A and B lacking consciousness, there is neither an attempt at subterfuge on the part of B nor an attempt by A to assist C in successfully playing the game.

            We may now, as Turing did, replace our original question, “Can machines create reality?” with a set of new questions that are more amenable to scientific probing.  Such questions take the form of, for example, “With an appropriately constructed and programmed machine, would an intelligent and perceptive interrogator be able to discern which of A or B was God’s reality and which the imposter?” and the follow-on question, “With what percentage of success would the interrogator be able to do so?”  This shall therefore be the framework by which we determine if the imposter is “real”, rather than a determination based, for example, on computational requirements, such as, “If a substantial amount of computation would be required to give us the illusion that a certain entity is real, then that entity is real” (Deutsch, 1997, p. 92).
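
            To make the second of these surrogate questions concrete, one may imagine scoring the interrogator over many repeated rounds of the RIG.  The sketch below (written in Python purely for illustration; the guess function is entirely hypothetical and stands in for the interrogator’s unaided powers of observation) computes the statistic of interest.

    import random

    def play_rig_round(guess, gods_reality, mbve):
        """One hypothetical round of the RIG.  The two worlds are presented in a
        random order so the interrogator cannot rely on their labels; guess(w1, w2)
        must return 0 or 1, the index of the world believed to be God's reality."""
        worlds = [("gods_reality", gods_reality), ("mbve", mbve)]
        random.shuffle(worlds)
        chosen = guess(worlds[0][1], worlds[1][1])
        return worlds[chosen][0] == "gods_reality"

    def estimate_success_rate(guess, gods_reality, mbve, trials=1000):
        """Percentage of rounds in which the interrogator correctly identifies
        God's reality."""
        wins = sum(play_rig_round(guess, gods_reality, mbve) for _ in range(trials))
        return 100.0 * wins / trials

            An interrogator guessing blindly would score near 50%; a rate that remains near 50% for a capable interrogator is the operational sense in which a MBVE could be said to have successfully played the RIG.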

2.         Critique of the New Problem

            I will dispense with the possibly relevant question asked by Turing, namely “Is this new question a worthy one to investigate?”  I beg off this question and use as support the enormous interest in and research surrounding Turing’s original question.  If this new question were of no significant importance, it is unlikely that Turing’s original question would have caused so much inspection, introspection, and even grief among computer researchers.  As mentioned above, I will omit any discussion of the composition and programming of a digital computer, aside from suggesting, as Turing did, that the expected machine participant in the RIG is a discrete-state machine, or digital computer.  I will instead proceed directly to the objections that Turing anticipated against his original proposal, as he applied those objections to his IIG.

3.         Contrary Views on the Main Question

            Rather than make a specific prediction as to the approximate date on which a machine will pass the RIG (as Turing did for the IIG), I shall merely mention some of the more recent advances (circa 2001) in the abilities of digital computers to more fully participate in the RIG.  By citing such advances, I hope to convince the reader that, although current state-of-the-art machines (and programs) may not now successfully play the RIG, such a machine will come to exist.  This statement is fundamentally different from abstract quantum physics theories such as the “Turing principle” which suggests, “There exists an abstract universal computer whose repertoire includes any computation that any physically possible object can perform”, (Deutsch, 1997, p. 132).  Such theories are appropriate to apply to virtual reality theory; herein we consider virtual reality applications.

            We may be many years still separated from a machine that can successfully play the RIG.  However, based on the hardware and software technologies currently available to the average computer user/owner (not even considering the “above average computer user/owner” such as the U.S. military or large research laboratories), I predict that the RIG will be passed within a small number of human lifetimes.

            I incorporate the adherence to Turing’s ground rules into my prediction.  Specifically, the interrogator may not spend endless amounts of time in each world, and the interrogator must use only his/her powers of observation.  These rules are present to prevent, for example, the interrogator from basing their decision on some external factor such as the computer overheating and crashing.  The ground rules also prevent the interrogator from basing their decision on the output of some device specifically constructed to differentiate between God’s reality and the MBVE.

            As anyone who has played commercial video games in the local amusement center can attest, the greatest advances in the ability of a digital computer to generate virtual environments appear to be in the realm of projecting realistic visual images.  Close behind those are advances in the area of playing realistic (even 3-dimensional) sound.  Without wider experience, the casual reader may fall under the assumption that the presentation of visual and aural information is the complete extent of a modern computer’s ability to stimulate human senses.  This is absolutely not so, as recently (circa 2001), computer peripheral devices have become available that give the user some type of physical force feedback based on the actions they perform with their computer (Immersion Corporation, 2001).

            It therefore appears that, of the five classical Aristotelian senses, a modern personal computer can stimulate the senses of sight, hearing, and touch.  However, even this is not the full extent of such a machine’s ability to immerse its user in a virtual environment.  A computer peripheral device was recently developed that can generate odors in support of some computer-based task (DigiScents, 2001).  This type of peripheral device was followed rapidly by the development of a device that can generate various different tastes (TriSenx, 2001).  We therefore have available within our present MBVEs the basic ability to stimulate the senses of sight, hearing, touch, smell, and taste.  Although there are several other human senses that are generally accepted by physiologists (the senses of joint extension, body position, balance, thirst, hunger, fatigue, pain, and so on), these are not first-order outcomes of interactions with one’s environment, and will therefore be discarded from consideration.  I argue without support that the general framework of investigation developed in this essay will be valid regardless of the inclusion or omission of such second-order senses.

            With this background, we can begin to delve into the question “Can machines create reality?” by replacing it with the surrogate questions mentioned above.  In a manner similar to Turing, I will enumerate and anticipate some arguments against my prediction that a machine will someday exist that can successfully play the RIG.  I shall follow the format of presenting the objections to legitimizing machine-based intelligence, as developed by Turing in his original paper, and then examining how each objection applies to MBVEs.  I shall then follow with a section exploring objections that did not apply to Turing’s situation, but that do apply to MBVEs.

            (1) The Theological Objection

            In Turing’s document, this section examined the argument that only God can bestow a soul, and that He only gives them to human beings.  Only beings with souls can be considered intelligent, hence only human beings are intelligent.  Machines are created by human beings; ergo machines have no souls.  Therefore, no machine can ever be intelligent.

            The application of this objection to MBVEs follows a similar vein.  Adherents to a theological perspective will argue that only God can truly create anything, and in particular, only God can create a world (or a universe of worlds).  This specific point is described in the fundamental writings of almost all religions.  The argument follows that virtual worlds are created by machines (under the control of human beings); ergo they have no “stamp of the Creator” on them.  Therefore, they can never be “reality”.

            Turing avoids the problems that might arise in comparing the details of one religion vis-à-vis another, but instead focuses on this apparent limitation placed on God.  That is, God is supposedly all-powerful, and if He wished, He could indeed place his “stamp” on whatever He chose, giving souls to any object at all; therefore, He could give “God’s legitimacy” to a virtual world.  As does Turing, I choose to avoid trying to apply religious arguments and tenets to situations that are more appropriate to scientific probing.

            In particular, whether or not the hand of God is required to endow intelligence to machines (or legitimacy to virtual worlds) does not further the scientific thinking of either.  I leave this argument and its details to the theologians.

            (2) The “Heads in the Sand” Objection

            Turing describes this argument as the belief that if machines could be constructed that actually were intelligent, we would all be doomed, and therefore we should stop thinking about such things.  Similar to his response to the theological objection, he brushes this argument aside, essentially as being non-productive.  It is mainly an outcome of mankind’s egotistical nature requiring the attitude that all other beings must necessarily be “less than us” in some manner.

            With respect to the creation of virtual environments, Turing’s attitude may ultimately be the appropriate one.  However, as movies such as “The Matrix” (http://whatisthematrix.warnerbros.com) have recently depicted, it may be prudent to fear the ability of machines to construct virtual environments that are so “legitimate” that their human inhabitants are in peril.  The virtual world of “The Matrix” is likely to engulf us only in the far distant future (if at all).  However, many human beings are currently so enraptured by some far less “real” virtual environments that they spend many of their waking (and working) hours completely immersed in them.  Although it might be successfully argued that virtual environments such as the WWW and advanced video games are many orders of magnitude distant from the virtual environment of “The Matrix”, I argue that the difference is one of degree rather than of type.  The general thesis of the movie is the same as that of the addicted WWW user, the primary difference being in the level of addiction, not in the general behavior of the participants and their interaction with their particular virtual environment.

            So, unlike Turing, I cannot arbitrarily set aside this argument.  Some virtual environments are “real” enough, if not to fool their inhabitants into thinking of them as “the real world”, then at least to lure their inhabitants into replacing much of their interaction with “the real world” with interaction with the virtual environment.  The real problem of “The Matrix” (spending one’s life with one’s physical body inactive and pampered, while the consciousness is entertained by a good simulation) cannot be ignored; it is already here.  However, this fact does not make this objection a legitimate argument for why a machine could not successfully play the RIG.  I dispense with this objection on such grounds, and merely hope that it does not come to pass.

            (3) The Mathematical Objection

            This objection to the concept that machines may be intelligent was based on the work of Kurt Gödel (1931), who proved that discrete-state machines are limited in their powers of logic.  These limits arise in the efforts of a discrete-state machine to answer a variety of different questions.  Gödel proved specifically that these limits exist in a machine’s attempts to answer questions about its own processing powers.  In such cases, the limits he proved showed that discrete-state machines would give incorrect answers, or be unable to derive an answer in a finite amount of time.  These limits stem from the mathematical logic on which discrete-state machines are built, and they are unavoidable.  The argument against thinking machines, as Turing presents it, is that humans are not discrete-state machines, and are therefore not bound by such limits.  Hence, no machine can ever truly equal the logical power of the human brain.  (Turing pursues a similar argument in objection #7, related to the continuous nature of the structure of the central nervous system.)

            In separating the logical power of discrete-state machines from their underlying structure, two separate but related arguments arise.  I shall address these as separate objections, as did Turing, and delay until objection #7 to comment on the difference in the underlying structure of God’s reality versus that of a MBVE.

            The application of this objection to the RIG is that the MBVE is created and maintained by a discrete-state machine, and therefore the “logic” of the environment therein will be limited to what can be performed by a discrete-state machine.  God’s reality, on the other hand, is not based on a discrete-state machine (apparently), and therefore is not bound by the limits placed on the MBVE through its reliance on a discrete-state machine.

            In the case of environments, “logic” appears to us through internally consistent laws of physics.  Although in God’s reality we may not know them all, we do know many and suspect the existence of others.  In the case of a MBVE, we would expect that a similar set of laws would exist and that they would also be internally consistent.  Therefore, to appear realistic, the MBVE must be bound by such a set of laws.  It is difficult to guess whether the laws of physics would appear differently to us if God’s reality were controlled by a very high-speed discrete-state machine rather than its apparently continuous process(or).

            The rebuttal to this objection is that the objection itself may apply equally well to God’s reality.  That is, we do not know if the underlying process that controls God’s reality is discrete-state or continuous.  The MBVE must, by necessity, use a discrete-state processor, so all that is necessary is to use that discrete-state processor to model God’s reality to the extent that we ourselves are currently able to interrogate it.  While this is not likely to be computationally feasible, it does indicate that this objection is not a theoretical limit on the ability of a MBVE to successfully play the RIG.

             (4) The Argument from Consciousness

            Turing used this section to address the contention that no machine has true consciousness.  Such an argument states that machines cannot be considered intelligent until they can not only perform acts that require or stir emotions, but know that they do so [author’s emphasis].  Turing rejects the solipsist argument (that the only way to know whether a machine is truly thinking is to “be” that machine), and argues that settling the question of consciousness is not necessary for his test to operate correctly.

            This argument, as applied to MBVEs, approximates the belief held in some religions that the earth/universe does indeed have a consciousness.  Although most Western religions do not adhere to the concept of an “Earth Mother”, it is accepted and espoused by others.

            From the perspective of religions that believe in a reality that does have a consciousness, a machine could not create reality since that reality would not have a consciousness (unless all realities did have a consciousness, but this is a fairly untenable position to argue).  Therefore, God’s reality is the only one that has a consciousness, and therefore no MBVE can be a legitimate reality.

            I adopt Turing’s defense, arguing that presence or absence of consciousness does not void the utility of the RIG.  Conditions of consciousness placed on the MBVE may differentiate it in this respect from God’s reality, but will not impact its ability to successfully play the RIG.

            (5) Arguments from Various Disabilities

            In Turing’s original paper, this section is used to frame the argument that machines cannot be considered intelligent until they satisfy some specific, but typically changing, criterion.  These criteria run the gamut of actions normally considered to be those only performed by humans, such as: fall in love, be self-conscious (Turing specifically mentions that some of these criteria are based on objection #4, related to a machine’s lack of consciousness), have a sense of humor, and so on.  Turing addresses this argument by noting that machines have specific talents, and humans others.  Simply because we have observed that machines do not appear to have the same talents as humans does not mean that they would fail at his IIG.  He also notes that many of these criteria may be satisfied with more powerful machines.  Turing, for example, specifically mentions the possibility of a machine failing at the imitation game because it might answer complex mathematical calculation questions more quickly than could a human.  This difference would obviously be addressed by clever programming that would occasionally give an incorrect answer, or take extra time before answering.

            The relationship between this objection and a machine’s ability to create a virtual environment is largely similar to Turing’s discussion.  That is, to say that a machine-created reality is not “real” until that reality satisfies some particular criterion (it can fool a person of average intellect, it has a visual resolution of some certain level, it stimulates all human senses, and so on) fails for the same reasons.  Merely because we have all interacted with God’s reality and many of us have interacted with MBVEs, and we can all tell the difference, does not mean that we would never be confused by the two.

            Even Turing’s comments on a machine answering complex mathematical calculations too quickly and accurately, and the requisite clever programming to overcome it, apply to the creation of virtual worlds.  For example, it might be possible to create a MBVE that was “too good”, and would hence allow the interrogator of the RIG to unmask it by its perfection.  The clever programmer of such a MBVE would certainly inject some appropriate level of imperfection to fool such a wise and perceptive interrogator.

            (6) Lady Lovelace’s Objection

            This section of Turing’s original paper describes the objection that machines do not create anything new.  Machines can only repeat or possibly extrapolate from the knowledge that their programmer gave them at their inception.  Turing specifically mentions Hartree’s comment on the possibility of a machine developing a “conditioned reflex, which would serve as the basis for ‘learning’”.  This does not directly lead to the likelihood that such a machine would be able to create something new, but certainly guides our thinking in that direction.  Turing rephrases such arguments into the form that machines do not “take us by surprise”, which he then refutes.  This direction of logic leads to a discussion on whether the surprise is generated by some creative thought of the entity being surprised or the entity doing the surprising.

            This objection, when applied to MBVEs, leads directly to the realm of Darwinian evolution.  That is, a virtual environment created and maintained by a machine can only present a reality with parameters that were programmed (or were “extrapolatable”) by the original creator of the MBVE.  Nothing “new” would happen.  No new species would spontaneously arise.  No new societies would emerge.   Entities and actions that were not present in the first moment of the existence of the MBVE could not later come to pass.

            To overcome this objection, the programmer of a virtual reality, in order to make it indistinguishable from God’s reality, would, by necessity, have to program in some process of evolution.  The Darwinian evolution of God’s reality allows new species to be born.  The societal corollary of Darwinian evolution allows new societal forms and formats to rise from and beside past societal structures.

            Such “surprises” may not occur in the timeframe that Turing was describing (spontaneous events executed by machines).  However, a MBVE with the possibility of evolution could indeed lead to surprising events in a larger timeframe.  In fact, the Darwinian evolution of God’s reality has led to many surprises (e.g., the discovery of supposedly extinct species like the coelacanth) even to those quite learned in the fields of scientific evolution.

            I argue, therefore, that an application in the MBVE of the Darwinian evolution of God’s reality might allow “new things to happen”.  Also, I defer until objection #14 the consideration of spontaneity or surprises in very short time frames (less than the many generations required for evidence of Darwinian evolution to appear).

            (7) Argument from Continuity in the Nervous System

            In this section, Turing discusses the objection to machine intelligence that arises from the difference in structure between the human brain and a discrete-state machine.  As described in objection #3 above, discrete-state machines do not operate in a continuous fashion, whereas the human brain does.  While objection #3 dealt with the characteristics (and limits) of the logic available to discrete-state machines, the objection described herein deals with the physical structure of the thinking entity.

            Turing brushes aside any objection from this argument by stating his opinion that, in the IIG, “continuous-processing” machines would likely perform similarly to discrete-state machines.  Therefore, one can disregard the structural nature by which an entity (human or machine) comes by its intelligence.  What mattered to Turing was the intelligence that was displayed, not the circumstances from which it arose.

            When applied to MBVEs, this argument takes on a similar form.  It appears that God’s reality is continuous (Deutsch, 1997, p. 267, “ . . . time is a continuum.”), although there have been arguments made that both space and time are actually quantized (“The incompatibility of general relativity and quantum mechanics – which would become apparent only on sub-Planck-scale distances – is avoided in a universe that has a lower limit on the distances that can be accessed, or even said to exist, in the conventional sense”, Greene, 1999).  Certainly, at the level that most of us interact with God’s reality, all of our physical interactions with space and time are of a continuous nature.

            A MBVE, because it is controlled by a discrete-state machine, will “tick” with the speed of the processor of the machine.  There can be no exactly continuous process in such a world.  At its heart will be a quantized time that may be composed of extremely small quanta (clock speeds have recently passed the 1 gigahertz level for commercially available personal computers).  These worlds will travel forward through time with motion that appears to their inhabitants as continuous.  However, if the most precise measurement devices were brought into such a world, it would be apparent that its entities were at most moving very quickly through very small time quanta, but not moving continuously.  (This entire argument assumes that the devices that the human uses to interact with the environment are external to their central nervous system.  Deutsch (1997) posits that devices that directly couple with the CNS could, in fact, synchronize with and perhaps slow down the “clock speed of the brain”.  In such a case, a discrete-state machine could perfectly emulate a continuous process, as perceived by the interrogator [author’s emphasis].  However, the discrete-state machine, existing in God’s reality, would still not perfectly emulate a continuous process.)

            The matter of continuous space follows similarly.  Notwithstanding the philosophical physics arguments that God’s world is quantized in space, the physicists of our world say that we are living in a world in which space is continuous.  The terrain database that comprises the physical locations in the virtual environment made by a machine, however, will be forever limited by the discrete positions that describe every physical point within that virtual environment.
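
            A minimal sketch of what it means for a MBVE to “tick” through quantized time and to confine its entities to discrete positions follows; the time quantum, grid spacing, and update rule are invented for illustration only.

    TIME_QUANTUM = 1.0e-9   # a hypothetical one-nanosecond tick (roughly a 1 gigahertz clock)
    GRID_SPACING = 1.0e-3   # a hypothetical terrain resolution of one millimeter

    def snap_to_grid(position):
        """Every location in the MBVE is one of a finite set of grid points."""
        return round(position / GRID_SPACING) * GRID_SPACING

    def advance(position, velocity, duration):
        """Move a single entity forward one time quantum at a time; between
        ticks, nothing in this world changes at all."""
        t = 0.0
        while t < duration:
            position = snap_to_grid(position + velocity * TIME_QUANTUM)
            t += TIME_QUANTUM
        return position

            However finely TIME_QUANTUM and GRID_SPACING are chosen, motion in such a world is a rapid succession of small jumps, which is precisely the structural difference on which this objection rests.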

            My defense against this objection adopts Turing’s defense and adds to it.  I concur with Turing that successfully playing the IIG (or RIG) does not depend on whether the structure of the MBVE matches that of God’s reality (believed to be continuous).  The RIG places no importance on a discrete vs. a continuous space/time fabric.  Further, even though we are daily interrogators of God’s reality, we still don’t know whether it operates with a discrete or continuous space/time fabric.  It apparently doesn’t affect our actions too drastically, and I argue that it will not matter to us when we interact with the MBVE in the circumstances of playing the RIG.

            (8) The Argument from Informality of Behavior

            Turing uses this section to address the concerns of those who see a difference between the laws of behavior that control human beings and the laws that control machines.  There being a difference, machines cannot duplicate the actions of human beings.

            The difference, as the argument from Turing goes, lies in the rules that humans use to guide their behavior.  For example, it is possible to enumerate a list of rules that would guide almost everyone’s behavior almost all of the time.  However, it is not possible to enumerate a list of rules that would guide everyone’s behavior all of the time.  Since the actions of a machine must by definition be guided by a set of rules, it is impossible for a machine to completely emulate the actions of a human being.  As an example, Turing suggests that it is difficult to enumerate the rules governing actions as simple as might occur at a stoplight.  Although one might program into a machine what should occur when the light is red or green, one must also consider what to do when both red and green appear together (perhaps through some malfunction of the stoplight).  In such a case, a human being can spontaneously generate a new rule (and one that might even be fairly robust or appropriate), whereas it would be quite an extraordinary machine that would be able to do so (aside from some randomized action not based on previous experience or the current situation).

            Turing then delves into the difference between “laws of behavior” and “rules of conduct”, and notes that machines may well be better served by being guided by the latter, which are more likely to be adaptable to new situations.  Regardless, he does not give this argument credence, contending instead that machines may be constructed that are guided satisfactorily by internal mechanisms that appear to simulate those of humans, but that result in behavior that is not predictable.

            The application of this argument to MBVEs is as follows.  The laws of behavior that govern God’s reality are not (yet) completely known.  These are the laws of physics.  If it is the case that the laws of physics are not completely known, then a machine-based virtual reality cannot replicate God’s reality, because God’s reality follows all laws of physics, known or unknown (as described above in objection #3).

            The purist may argue that a machine-based virtual reality will become equal to God’s reality in this respect when all laws of physics have been discovered.  Although this is logically true, I argue that such a point in time will likely be a very long way off, if it occurs at all.  In fact, Turing himself stated that ‘we know of no circumstances under which we could say, “We have searched enough.  There are no such laws.”’   It therefore appears that the RIG is safe from this direction of attack.

            (9) The Argument from Extrasensory Perception

            Turing even addresses the possibility that the interrogator might come under the influence of such a phenomenon as ESP.  Since it is widely accepted that no machine could have ESP, if evidence of such an occurrence arose during the IIG, the interrogator would be certain to know that the entity displaying ESP was in fact the human and not the machine.  Turing addresses (and generally dismisses) the possibility that the machine may actually be impacted by ESP-like powers.

            There is an interesting parallel between this section of Turing’s paper and its application to MBVEs.  ESP is the existence in God’s reality of intelligence-based events outside of the “normal” laws that govern such events.  Transferring this train of thought to actions in physical environments, we might consider actions such as teleportation and time travel.  The physical laws that govern God’s reality do not necessarily prohibit these types of actions.  It may be that teleportation and time travel are permissible within the laws of physics, but that we are merely incapable with our present knowledge and technologies of performing such acts.

            If such is the case, we have a rebuttal contrary to Turing’s response.  He suggests that the interrogator in the IIG would know the human from the machine in the presence of ESP because that presence emanates only from a human intelligence and never from a machine intelligence.  In the case of the RIG, the opposite occurs.  The interrogator in the RIG would be able to differentiate God’s reality from the MBVE in the presence of teleportation or time travel because such acts can only occur (at least at present) in a MBVE.  The same would hold true for any other “paranormal” physical event that appears to void our known laws of physics, and that we generally agree does not occur in God’s reality.  I therefore dismiss this objection on the basis that it would only arise by specific action of the programmer of the MBVE.

4.         Other Contrary Views not Discussed by Turing

            This section of the essay discusses other objections to the prediction that machines will some day create virtual environments that are indistinguishable from God’s reality.  These objections, however, are ones that are relevant to the RIG, but that did not arise in Turing’s original inspection of machine intelligence.  One fundamental difference between the IIG and the RIG is that the contestants in the former are single entities and the contestants in the latter are worlds full of entities.  This difference will give rise to issues relevant to the RIG but not relevant to the IIG.

            (10) The Arguments of Entropy, Uncertainty, and Chirality

            One aspect of environments that does not apply to machine intelligence is the property called entropy.  Specifically, God’s reality must follow the Second Law of Thermodynamics, and continually tend (as a system) toward disorder and equilibrium.  A MBVE is bound by no such law.  In fact, it would be possible in a virtual environment to directly violate the Second Law of Thermodynamics and construct a perpetual motion machine of the second kind.  The astute interrogator might attempt to build such a device, and if successful, note that the environment in which they were working must therefore be the MBVE.

            Related to this argument is one concerning the Uncertainty Principle.  An astute interrogator might attempt to simultaneously determine both the position and momentum of, for example, an electron.  If they were successful, they would know for a fact that they were in the MBVE, as within God’s reality, we are unable to carry out such a feat.

            Also related to this general argument is the concept of chirality.  Chirality refers to the characteristic of left-right (mirror) symmetry, known more formally as parity, that appears to hold throughout almost our entire known universe.  Specifically, this principle states that any physical process that occurs is indistinguishable from that same process when viewed in a mirror (which exchanges the left and right halves of an image).  It turns out that almost all, but not all, physical processes (that occur in God’s reality) are bound by this law.  Therefore, a motivated interrogator could set up one of the few physical experiments that violates chirality and note whether or not the symmetry held true.  If chirality was violated, then the experiment must have taken place in God’s reality; if it was not, it must have taken place in the MBVE.

            I dismiss these arguments on the basis that, while a discrete-state machine might never be able to exactly replicate the specific processes that exist in God’s reality, it could, for all practical purposes, approximate them closely enough.  I further argue, based on the ground rules of the RIG, that this does not void the validity of the RIG for measuring MBVEs.

            Also, although it is true that a programmer could have built the laws of thermodynamics and the Uncertainty Principle into the functioning of the MBVE, they would also have to build in the principle of chirality and its exceptions.  While this is possible, it does illustrate the lengths to which a programmer would have to go in order to exactly replicate God’s reality.  In any case, the ground rules of the RIG suggest that, while these differences might exist, they are highly unlikely to prevent a MBVE from successfully playing the RIG.

            (11) The Argument of Ecology

            Since the contenders in the RIG are worlds full of entities, we must consider that God’s reality includes the concept of ecology, or interaction of all of its constituent entities.  The particulars of this argument are as follows: the numerous entities of God’s reality are, by natural cause, forced to interact with each other.  Those entities are so numerous, and the interactions so complex, that no MBVE would ever be able to replicate them.

            This is an argument of concern, as an interrogator in the MBVE would rapidly note the lack of interaction among the entities therein.  This argument, similar to several others in this section, hinges on the difference between the theoretical and the practical difficulties of constructing a MBVE.  While God’s reality contains an incredible number of entities that interact with each other, it is unlikely that the casual interrogator (if bound by the ground rules of the RIG) would notice the absence of a significant number of these interactions.  Therefore, it is my contention that, while this argument is theoretically correct, for practical purposes it is unlikely to prevent a MBVE from being a serious contender in the RIG.

            (12) The Argument of Chaotic Processes

            An argument related to that of ecology is that of chaos.  This appears to be a second-order complication of the principles of ecology.  As described in argument #11, interactions of the ecological kind (generating and applying the interactions between all participating entities of a world) are likely to be impossible for a MBVE to calculate and implement.  However, it is at least an order of magnitude more difficult for such a machine to calculate and implement the chaotic effects of such a system.  That is, the principles of chaos theory (in which very small changes to the exact initial state of the system will lead to large differences in a later state of the system) will hold automatically in God’s reality, but will be impossible to execute in any MBVE.
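
            The sensitivity to which this argument appeals is easy to exhibit.  The sketch below uses the standard logistic map as a stand-in for any chaotic process (the map, the parameter, and the initial values are illustrative only and make no claim about any particular MBVE): two initial states differing in the ninth decimal place diverge completely within a few dozen steps.

    def logistic_trajectory(x0, r=4.0, steps=50):
        """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.200000000)
    b = logistic_trajectory(0.200000001)   # perturbed in the ninth decimal place
    print(abs(a[-1] - b[-1]))              # after 50 steps the two states bear no resemblance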

            I again use the difference between the theoretical and practical perspective in opposing this argument.  If we use the ground rules of Turing, the theoretical limitations of this argument, while strictly true, will for practical purposes be unlikely to prevent a MBVE from successfully playing the RIG.  In a manner similar to that of ecology, it is unlikely that an interrogator will be able to differentiate God’s reality from a MBVE, based solely on the adherence to chaos theory in the one and not in the other.

            (13) The Argument of Entity Structures

            In a manner similar to chaos theory, some might argue that the existence of fractals (entities with detailed structures at all levels of examination) would allow an interrogator to distinguish God’s reality from the MBVE.  Whereas chaos theory applies to the actions within a system, fractals apply to the structure of entities within a system.  In both cases, the computing power required to exactly implement either prohibits the MBVE from ever perfectly replicating God’s reality.

            However, the defense against the argument of chaos theory applies even more appropriately to the argument of fractals.  The amount of computing power required to generate the appropriate fractal characteristics of the entities in the MBVE is beyond anything likely to exist in the near future and perhaps forever.  That limitation does not mean that a MBVE could not successfully play the RIG.  While the interrogator bound by Turing’s ground rules is not likely to distinguish God’s reality from the MBVE based purely on the existence of chaos theory (or lack thereof), they are in the same predicament with respect to the existence of fractals (or lack thereof).  No current (or contemplated) discrete-state machine can correctly implement chaos theory or fractals in a virtual environment; that does not mean that such an environment would necessarily fail in playing the RIG due strictly to that flaw.
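
            A toy illustration of the cost to which the fractal argument points: the one-dimensional midpoint-displacement procedure below (a common way of roughening synthetic terrain; the parameters are invented) must double the number of points it computes and stores for every additional level of detail, so structure “at all levels of examination” is unreachable for any finite machine.

    import random

    def midpoint_displacement(left, right, depth, roughness=0.5):
        """Recursively roughen a height profile between two endpoints; each
        additional level of depth adds finer structure at exponential cost."""
        if depth == 0:
            return [left, right]
        mid = (left + right) / 2.0 + random.uniform(-roughness, roughness)
        first = midpoint_displacement(left, mid, depth - 1, roughness / 2.0)
        second = midpoint_displacement(mid, right, depth - 1, roughness / 2.0)
        return first[:-1] + second

    coarse = midpoint_displacement(0.0, 0.0, depth=4)    # 17 points
    fine = midpoint_displacement(0.0, 0.0, depth=20)     # over one million points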

            (14) The Argument of Randomness

            This argument arises from those critics who notice that God’s world is apparently at the mercy of various truly random events.  This is a separate discussion from Turing’s original objection #6 (Lady Lovelace’s objection that machines do not have the facility for creativity), since creativity implies intent on the part of some entity.  Randomness has no such implication of intent (ignoring the possibility of consciousness of the environment).  Random events appear to occur without the instigation of any particular entity.

            Therefore, those who propose this argument against the ability of a MBVE to succeed at the RIG state that the interrogator would be able to discriminate God’s reality from the MBVE by noting that no truly random events occur within the latter.  One would even be able to differentiate a “replay” in the MBVE of a set of truly random events that had occurred and been recorded earlier in God’s reality.  For example, a (sufficiently long) truly random set of dice rolls that occurred in God’s reality would never be perfectly reproduced anywhere, even in God’s reality.  Therefore, if the interrogator had prior knowledge of this set of dice rolls, and saw that same exact set of dice rolls occur in any environment, they would know with certainty that they were not observing God’s reality.

            The effect of randomness and associated random events is primarily a function of the ability of the programmer of the MBVE.  Perfect random number generators do not exist, and therefore perfectly random events would not occur within the MBVE.  Even replaying perfectly random events in a MBVE that were recorded from God’s reality would not suffice.  However, I argue that the interrogator bound by Turing’s ground rules would not be able to distinguish between the truly random events of God’s reality and the pseudo-random (or previously recorded truly random) events that could be generated in the MBVE.
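
            The difference between the truly random and the pseudo-random is simple to demonstrate.  The sketch below (using Python’s standard generator; the seed and the number of rolls are arbitrary) shows that a discrete-state machine’s “random” dice rolls are exactly reproducible from their seed, which is the weakness this objection points to and which, under Turing’s ground rules, the interrogator is unlikely to be able to exploit.

    import random

    rng_1 = random.Random(12345)   # an arbitrary seed
    rng_2 = random.Random(12345)   # the same seed a second time
    rolls_1 = [rng_1.randint(1, 6) for _ in range(100)]
    rolls_2 = [rng_2.randint(1, 6) for _ in range(100)]
    assert rolls_1 == rolls_2      # identical seeds yield identical "random" rolls

    # A genuinely random sequence of 100 dice rolls in God's reality would never
    # repeat exactly; an exact repetition of a previously recorded sequence would
    # betray a replay, as described above.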

            (15) The Argument of Self-Containment

            This objection concerns the possibility that inhabitants of a reality may look for clues indicating an “outside” reality.  As Deutsch (1997, p. 139) points out, “The rendered environment would also have to be such that no explanations of anything inside would ever require anyone to postulate an outside.  The environment, in other words, would have to be self-contained as regards explanations”.  Particular to a MBVE, the objection is that it could not be real if it contained evidence that it only existed as a “subset” of some “higher-level” reality.  With respect to the RIG, this means that the MBVE should not contain evidence that it was actually a virtual environment that was originally constructed within God’s reality.  If so, the interrogator would know that the MBVE was not “real”.

            An interesting twist to this objection is that the situation it describes apparently arises in God’s reality itself.  This happens when physics experiments (such as single- and double-slit light interference patterns) yield results that cannot be explained by the known laws of physics in God’s reality.  Such experimental results suggest that there may be other realities besides our own (other universes parallel to our own) that may peripherally impinge on our own universe (Deutsch, 1997).  Without the results of these (and other) experiments, we would have little reason, beyond speculation, to suspect that there are universes other than our own.  With the repeatable and empirical results of these experiments, we are forced to consider the possibility of other universes.

            The ultimate impact of this objection on the RIG is that any MBVE must be “self-contained” in the respect that an interrogator in the MBVE would not have reason to suspect that they were actually in an MBVE.  However, since the concept of “self-containment” apparently does not even apply to God’s reality, it appears that this objection is not likely to prohibit a MBVE from successfully playing the RIG.

5.         Evolving Machines

            Turing spent considerable time discussing the methods by which one might create a machine that would have a chance at successfully playing the IIG.  Many of the same suggestions apply with the same force in the construction of a machine that would have a chance at successfully playing the RIG.  In particular, he argues against the “whole birth” of such a machine.  This method suggests that one create a machine that would have, at its conception, all of the facts and rules necessary to play the IIG.  Turing argues that it would be difficult to create such a machine, since we as human beings hardly know exactly how to describe the human mind in the first place.  Without such knowledge, it is essentially impossible to replicate that knowledge in a machine.  The same argument may or may not hold in the construction of a machine that would play the RIG.  If the argument did hold, the construction method would be to create a world, complete in all its glory, from the moment of its conception.  I argue the contrary.  Considering all of the difficulties in creating a “perfect” MBVE, as described above, it appears that it would be impossible to construct a machine that would contain, at its inception, all of the elements necessary to replicate God’s reality.

            Turing then argues that perhaps a better way to construct a machine to play the IIG would be to follow the human development process.  That is, he argues that humans are able to play the IIG quite successfully, and so we should develop our machine in the same way that humans are developed.  As a species, humans have been guided by evolution to attain our current form.  As individuals, humans are guided by (as Turing separates them) education and experience.

            Such a process would result in developing a machine that entered its existence with a relatively small amount of knowledge, but a large capacity for gaining knowledge (as a human exists at birth).  One would then introduce the machine to the world, allowing it to learn from both education (guided exposure to knowledge) and experience (unguided exposure to knowledge).  After examining the growth of such a machine, and the level of its ability to play the IIG, one could use the processes of genetics to create the next generation of a machine to go through a similar process.  Evolution of such machines over time would guide their general characteristics toward greater success at playing the IIG.  This sequence of learning, genetics, and evolution, would drive the process of developing a machine with enough intelligence to stand its ground in playing the IIG.

            A similar process could be used for developing a machine that would successfully play the RIG.  However, due to the greater complexity in the rules for controlling a virtual world (as opposed to the rules for controlling an intelligent mind), I believe that such a process would not ultimately be successful.  It would also suffer from the flaw that the environment would not necessarily evolve into a world that is similar to God’s reality.  Such an environment might be internally consistent and in agreement with the general rules that govern God’s reality, but contain entities that are completely foreign to us.  It might be very interesting to investigate, but it would take only moments for an interrogator to recognize the contrast with God’s reality.

            I therefore argue for a combination of “evolution plus a guiding hand” for developing a machine used to create a virtual environment to rival God’s reality.  The resulting process might then follow steps similar to these (a rough sketch in code follows the list):

            1. Derive, program, and then implement the primary physical laws that will govern the environment (including the laws that guide evolution).

            2. Create an environment simpler than our own, perhaps with only the most simple and relevant entities (land, water, air, plants, animals) already in existence.

            3. Allow the environment to evolve.

            4. After some suitable amount of time, allow the environment to play the RIG.

            5. If the environment fails, create a modified environment using genetic processes to modify the laws of step 1 and the entities of step 2.

            6. Go to step 3.
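
            A rough sketch of this process follows.  Every callable passed into the function below (the derivation of laws, the seeding of entities, the evolution, the playing of the RIG, and the genetic modifications) is hypothetical; the sketch shows only the shape of the loop, not how any of its steps would actually be built.

    def develop_mbve(derive_laws, seed_entities, evolve, play_rig,
                     mutate_laws, mutate_entities,
                     generations=100, evolution_time=1_000_000):
        """Shape of the 'evolution plus a guiding hand' process; every callable
        is a stand-in for work this essay does not attempt to specify."""
        laws = derive_laws()                                 # step 1: program the governing physics
        entities = seed_entities()                           # step 2: a simple starting population
        for _ in range(generations):
            world = evolve(laws, entities, evolution_time)   # step 3: let the environment evolve
            if play_rig(world):                              # step 4: test it in the RIG
                return world
            laws = mutate_laws(laws)                         # step 5: genetic modification of the laws
            entities = mutate_entities(entities)             #         and of the entities
            # step 6: loop back to step 3 with the modified environment
        return None   # no environment passed the RIG within the allotted generations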

            While I am not confident that such a process will shortly give rise to machines that can successfully play the RIG, I believe that it is a useful starting point.  The regular advances that we see in the technologies of computer hardware and software will certainly smooth the way for applying such a process.

            I shall end as Turing did, by noting that “We can only see a short distance ahead, but we can see plenty there that needs to be done”.

REFERENCES

Deutsch, David.  1997.  The Fabric of Reality: The Science of Parallel Universes – and Its Implications.  Allen Lane: The Penguin Press, New York, NY.

DigiScents.  2001.  http://www.digiscents.com

Gödel, Kurt.  1931.  Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I.  Monatshefte für Mathematik und Physik, 38, 173-198.

Greene, Brian.  1999.  The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory.  W.W. Norton & Company, New York, NY.

Immersion Corporation.  2001.  http://www.immersion.com

Saygin, A. P., Cicekli, I., & Akman, V.  2000.  Turing Test: 50 years later.  Minds and Machines, 10 (4).

TriSenx.  2001.  http://www.trisenx.com

Turing, Alan.  1950.  Computing machinery and intelligence.  Mind, 59, 433-460.