21 July 2018

"Quantum Computing since Democritus" by Scott Aaronson (2013)

1.

Roger Penrose asserts that humans have access to truths beyond those accessible to computers, which run algorithms. That not all truth is accessible to algorithms is an implication of Kurt Gödel's incompleteness theorem, which (as long as one is capable of imagining a computer) follows from Alan Turing's halting problem.

Imagine a computer (or, rather, a Turing machine, with infinite memory). Turing’s claim is that there exists no programme P that, for any programme Q capable of taking itself as an input, returns P(Q)=‘halts’ if Q(Q) halts in finite time and returns P(Q)=‘loops’ if Q(Q) runs forever. (Instead of executing Q(Q), P analyses the code of Q in order to determine whether Q(Q) would terminate in finite time. By assumption, P performs this analysis in finite time.)

Suppose, by contradiction, that the described programme P can be written. Define the programme R such that, for any programme Q capable of taking itself as an input, R(Q) runs forever if P(Q)=‘halts,’ and R(Q) halts if P(Q)=‘loops.’ Given P, R is an elementary programme to write.

Now, take R as an input for itself. If P(R)=‘halts,’ then, by the definition of R, R(R) runs forever, contradicting what P asserts. If P(R)=‘loops,’ then R(R) halts, again contradicting P. Either way, a contradiction. Hence, the hypothesised code analyser P cannot exist.
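The diagonalisation above can be sketched in code. This is a minimal illustration, not a real analyser: the function names `halts` and `R` are my own, and `halts` is, of necessity, a stub, since Turing's argument shows no such total function can be written.

```python
def halts(q):
    """Hypothetical analyser: would return True if q(q) halts in finite
    time and False if q(q) runs forever. By Turing's argument, no such
    programme exists, so this stub merely marks the assumption."""
    raise NotImplementedError("Turing: no such analyser exists")

def R(q):
    """Given the analyser, R is elementary to write: R(q) runs forever
    if halts(q) says q(q) halts, and halts if it says q(q) loops."""
    if halts(q):
        while True:   # loop forever
            pass
    return "halted"

# Feeding R to itself yields the contradiction:
# R(R) runs forever  <=>  halts(R) is True  <=>  R(R) halts.
```

Running `R(R)` with the stub simply raises the exception; the point is that any genuine implementation of `halts` would be inconsistent with R's own behaviour on input R.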

Gödel’s incompleteness theorem says that there exists no consistent, computable system (i.e., a computer equipped with a programming language) in which every statement expressible in the language of that system can be either proved or disproved. Turing’s halting problem proves the claim.

Indeed, for an arbitrary programme X, take the statement “X halts.” Well, X either halts or it does not. A brute-force way to prove or disprove the statement is to write a programme, call it P, that runs through all possible proofs and thereby finds either a proof or a disproof of the statement “X halts.” That is, P determines whether X halts, for any X. But we have just argued, from Turing's halting problem, that no such P exists. Thus, one can construct a statement (here, “X halts,” for some X) such that the set of all proofs contains neither a proof nor a disproof of that statement. Q.E.D.
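The brute-force prover can be sketched as follows. Everything here is illustrative: the alphabet is arbitrary, and the proof checker `is_proof_of` is assumed, not implemented; what a formal system does guarantee is that checking a candidate proof takes finite time.

```python
from itertools import count, product

# Illustrative symbol set from which candidate proofs are built.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ()=,."

def decide(statement, is_proof_of):
    """Enumerate every finite string over ALPHABET, shortest first.
    Return True upon finding a proof of `statement`, False upon
    finding a proof of its negation. If the system proves neither,
    this loop never terminates -- which is Gödel's escape hatch."""
    negation = "not " + statement
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            candidate = "".join(chars)
            if is_proof_of(candidate, statement):
                return True
            if is_proof_of(candidate, negation):
                return False
```

With a toy checker that accepts, say, the string "qed" as the sole proof of a given statement, `decide` terminates quickly; with a genuine proof checker for a consistent system, `decide` would decide "X halts" for every X, contradicting Turing.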

To summarise, as shown by Gödel and Turing, even the most advanced computers cannot prove certain truths. Penrose postulates that humans can; they have access to insight about these truths, perhaps thanks to some hitherto unexamined quantum-mechanical processes in the human brain (which are also responsible for the freedom---or at least unpredictability---of one's will). This inaccessible-to-the-computer insight is the essence of consciousness (according to Penrose). What does this direct insight feel like? It feels like understanding a mathematical result as opposed to understanding its proof. One can understand a proof without really understanding the result; alternatively, one can be convinced of the truth of the result while still groping for a proof.

Unfortunately, one is often wrong; one’s conjectures are often false. How should one then interpret the faulty “insight”? Is consciousness to blame for the faulty “understanding” of the result? Perhaps only the brilliant people, such as Penrose, who are rarely wrong, can postulate the superiority of the human brain over the Turing machine and believe that human consciousness is the insight minus the proof: the difference between the noncomputable brain and the computable machine.

There is no evidence that the human brain is anything more than a computer. (Sometimes, it computes astonishingly fast, but only because evolution has already been computing for 3.8 billion years.) It probably is a computer, but there is no dispositive evidence to this effect. So, what should one believe? If one is an active scientist, one may toss a coin: the diversity of beliefs and research directions in science is commendable. If one is a layman, one, too, has a choice. The belief that one is but a computer may be conducive to "better" morality, which leads to cooperative societal outcomes. Indeed, if one’s standards are low enough to grant consciousness to an advanced computer, then the chances are that these standards will also be low enough to grant consciousness to one’s political opponent, to a person of a different ethnicity, and to a mentally ill individual. This sense of shared humanity is likely to lead to a cooperative ethos that would make life better for all. Alternatively, one may choose to believe that humans are special. I see no profit in such a belief except that, to some, it may feel good.

2.

Turing’s imitation game (from his 1950 paper "Computing Machinery and Intelligence") provides a working definition of being human. First, being human is not about the physical form itself but about how this form can be distilled into a tweet, a blog post, an email conversation, a screenplay, or a book. (This idea has become the essence of the entire human rights movement.) Second, human intelligence generates surprising (to another human) insights. Third, there is nothing sacred about human imperfection. (If there were, we would have castigated geniuses instead of worshiping them. Freeman Dyson’s definition “God is what mind becomes when it passes beyond the scale of our comprehension” applies equally well to a brilliant scientist as it does to a choreographer, a film director, a soccer player, or a computer.) Fourth, a property of a creative mind (possessed only by a “smallish proportion” of humans, according to Turing) is supercriticality. The analogy here is with a nuclear chain reaction. A supercritical mind (as opposed to a subcritical one) responds to an idea with another idea, maybe with two, or maybe with an entire theory.

Presciently, Turing warns against fetishising consciousness. Otherwise, one may go all the way to assuming that the only way to ascertain that A is thinking is to be A and experience the thinking for oneself. Such a solipsist convention would not be useful: “Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.”

Turing contemplates a variation of the imitation game in which the machine imitates someone other than a human, say, another machine. Or, let us say, a cat. If the machine succeeds to the cat’s satisfaction, then we shall have to grant the machine its catness, just as the cat has done. Then, if the machine is accepted as its own by various species, it should pass for a rather universal kind of God.

Instead of picking a fight with the religious dogma, Turing points out that machine intelligence is not inconsistent with it: “In attempting to construct such machines we should not be irreverently usurping His [God’s] power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.”

Turing’s paper is a master class in writing: “The reader will have anticipated that I have no very convincing arguments of a positive nature to support my views. If I had I should not have taken such pains to point out the fallacies in contrary views.” Or “I do not think that this argument is sufficiently substantial to require refutation. Consolation would be more appropriate: perhaps this should be sought in the transmigration of souls.”

3.

The hypothesis P≠NP asserts that a problem whose solution can be verified in polynomial time (a problem in NP) need not be solvable in polynomial time (a problem in P). That is, solutions to some problems are easy to check when presented but are hard to find. The hypothesis appears to be self-evident from one's quotidian experience, but its formal proof has been elusive. Because any problem in P is also in NP, to prove P≠NP one must come up with an example of a problem in NP that is not in P.

Now, it turns out that proving P=NP could also be accomplished by an example (instead of showing that every problem in NP is also in P). It has been shown that many a problem in NP is hard in the sense that all problems in NP can be efficiently reduced to it. So, in the unlikely event that one could efficiently solve any of the many well-known hard (so-called NP-complete) problems in NP, one would have efficiently solved them all. That is, one example of an NP-complete problem that is also in P would have sufficed both to show P=NP and to provide an algorithm for solving all problems in NP. Thus, even though the odds are against P=NP, the stakes are high (and the spectre is looming tall).
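The asymmetry between checking and finding can be made concrete with subset sum, a classic NP-complete problem: given a list of numbers, is there a subset summing to a target? The sketch below is illustrative; the function names are my own. Verification of a certificate (a set of indices) takes linear time, while the only known general search strategies are, in the worst case, exponential in the size of the list.

```python
from itertools import combinations

def verify(numbers, target, indices):
    """Polynomial-time check of a certificate: do the numbers at the
    given indices sum to the target?"""
    return sum(numbers[i] for i in indices) == target

def search(numbers, target):
    """Brute force: try all 2^n subsets of indices. Returns a
    certificate (a list of indices) or None if no subset works."""
    n = len(numbers)
    for r in range(n + 1):
        for idxs in combinations(range(n), r):
            if sum(numbers[i] for i in idxs) == target:
                return list(idxs)
    return None
```

A polynomial-time `search` for this one problem would, via the reductions mentioned above, yield a polynomial-time algorithm for every problem in NP.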

We choose to live in the physical world instead of living in our own minds because our minds compute slowly. The physical world computes the models that we are interested in fast. Without cognitive limitations, we all would have lived in our heads, each of us simulating a world of his liking. Now that technology rather successfully aids one in living in one’s head, we shall need the outside world less. That may mean peace. In the meantime, the complex, incomprehensible world requires trust and simple rules of conduct.