QUANTUM INFORMATION PROCESSING
What is a quantum computer?
In a nutshell, a quantum computer is a machine that encodes information into quantum states and computes by performing quantum operations on those states. It harnesses massive amounts of processing power by exploiting quantum mechanical effects such as superposition, entanglement, and especially interference, to solve problems that would otherwise take conventional computers an exorbitant amount of time to crack. According to one of its pioneers, David Deutsch, quantum computers can “perform new types of computation that would be impossible, even in principle, on any classical computer. Quantum computation is therefore nothing less than a distinctly new way of harnessing nature.”
What is it used for?
It is inappropriate, however, to envision quantum computers as smaller and faster incarnations of today’s digital computers. Quantum computers work according to the principles of quantum mechanics, which are entirely different from the principles behind conventional computers, and these machines can potentially solve problems that are intractable for conventional computers.
So it is important to first understand in what ways quantum computers are superior. A great example is prime factorization. While it is a simple task to multiply two prime numbers, it is very hard to compute the prime factors of a large number. We saw in Section 17.1 that modern cryptography rests on this asymmetry: two large prime numbers are multiplied to create a security key. Unlocking that key is very difficult and may take a classical computer a very long time, simply because it involves factoring a very large number. Today, Internet security is grounded in an encryption scheme called RSA encryption (named after Rivest, Shamir, and Adleman) based on prime factorization.
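The asymmetry is easy to demonstrate in a few lines: multiplying two primes is a single machine operation, while recovering them by the naive method, trial division, takes on the order of the square root of their product. A small Python sketch (the primes here are arbitrary illustrative choices):

```python
def trial_division(n):
    """Factor n by naive trial division; the work grows
    exponentially with the number of digits of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

p, q = 104723, 104729      # two primes, chosen for illustration
n = p * q                  # multiplication: essentially instant
# recovering p and q already takes ~100,000 divisions here;
# for RSA-sized numbers (hundreds of digits) it becomes hopeless
assert trial_division(n) == [p, q]
```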
In 1994, the mathematician Peter Shor, then at AT&T Bell Labs, invented an algorithm that could run on a quantum computer to quickly find the prime factors of a very large number. Shor’s algorithm offered a huge time advantage: calculations that would take a normal computer longer than the age of the universe would take a sufficiently powerful quantum computer running Shor’s algorithm a reasonable period of time to complete (see the Section entitled ‘Quantum Algorithms’ below).
Another potential use of quantum computers is database searching, say, finding a particular phone number in a large phone book ordered alphabetically by name but not by phone number. Operationally, this means going through the whole phone book and looking at every entry. A normal computer has to check every single record, a very time-consuming process. In contrast, a quantum algorithm needs only roughly the square root of that number of steps, a considerable saving for large databases. In 1996, Lov Grover of Bell Labs devised just such an algorithm, which lets a quantum computer search an unsorted database considerably faster than a conventional computer.
Today, quantum computers exist in a handful of laboratories around the world (we will look at the current prototypes in the Section entitled ‘Building a Quantum Computer’), but the engineering challenges of mass-producing them are beyond our current technical means. Today’s machines can solve simple problems, for example, finding the prime factors of a number as big as 56,153. But quantum computing offers immense possibilities. Although in its infancy today, it could be the most revolutionary advance in computer technology since the invention of the microprocessor. While these remarkable machines will probably never replace normal computers, in some areas they are vastly superior to their classical counterparts.
History of Quantum Computing
Classical computers are based on the abstract universal Turing machine, the prototype of all classical computers, devised in 1936 by the mathematician Alan Turing (Turing, 1936). This groundbreaking idea, now known as the Church-Turing thesis, defined an ‘algorithm’ as a model of human calculation, a calculation (or computation) being a mental process executed by the human mind.
In 1982, Richard Feynman (Feynman, 1986) pointed out that a computer exploiting the laws of quantum mechanics could potentially solve problems that are intractable for a classical computer. Feynman noted that the computational effort required by classical computers to simulate the dynamics of a quantum system grows exponentially with the size of the system under investigation. A suitably controlled quantum system, on the other hand, could mimic, and therefore simulate, the dynamics of that same quantum system far more efficiently. He proposed the idea of the quantum Turing machine, or the universal quantum computer, as a quantum-theoretical analog of the original Turing machine. Similar ideas were developed independently by Yuri Manin in 1980.
In 1984 David Albert described a ‘self-measuring quantum automaton’ that performed tasks no classical computer could simulate. By instructing the automaton to measure itself, it could obtain ‘subjective’ information that is absolutely inaccessible by measurement from the outside. In 1989 David Deutsch proved that all the computational capabilities of “any finite machine obeying the laws of quantum computation are contained in a single machine”, the universal quantum computer. Such a computer could be built from the quantum equivalent of the Toffoli gate (logic gates are discussed in Section 17.3.6) by adding a few extra operations that bring about linear superpositions of the 0 and 1 states. The Church-Turing-Deutsch thesis, as it is now known, asserts that every physically realizable computation can be executed by a universal quantum computer.
How Quantum Computers Work
The building block of all digital (classical) computation is the bit, a single unit of information that can be either 1 or 0. In contrast, a quantum computer stores information in “qubits.” A qubit can be realized in certain quantum properties of trapped particles, such as the two-level system formed by the up and down spin of a particle in a magnetic field, or the horizontal and vertical polarization of a single photon. Crucially, a qubit can be in two states at once, both 0 and 1, in any proportions of the two states simultaneously, until a measurement or observation is made. So prior to measurement, the qubit is in a quantum superposition of 0 and 1 whose outcome cannot be predicted while unobserved. But as soon as a qubit is measured or observed, say, by sending the photon through a filter, the photon has to “decide” to be either vertically or horizontally polarized, collapsing the qubit into one definite state.
Four classical bits can be in one of 2^4 = 16 possible configurations, of which only one can be used at a time. Four qubits in superposition, however, can be in all 16 combinations at once, and this number grows exponentially with each extra qubit: an n-qubit quantum computer can hold 2^n numbers, so 20 qubits can store about a million values in parallel. A quantum computer can perform multiple operations simultaneously, rather than sequentially like a classical computer, since data in a quantum computer can exist in multiple states, and the intertwining of quantum states exponentially increases the number of 0s and 1s that an array of qubits can process at once. It is quantum superposition, the archetypal signature of quantum mechanics that has no classical analog, which gives the quantum computer its ability to rapidly solve certain classes of complex problems (see above).
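One way to appreciate this exponential growth is to simulate a register of qubits on a classical machine: the simulation itself must store all 2^n amplitudes, which is precisely why classical computers struggle. A minimal sketch (amplitudes as a plain list, measurement as weighted random sampling; this is an illustration, not how a real quantum computer is programmed):

```python
import random

def uniform_superposition(n):
    """State vector of n qubits after putting each into an equal
    superposition: 2**n equal amplitudes, doubling with every qubit."""
    dim = 2 ** n
    return [1 / dim ** 0.5] * dim

def measure(state):
    """Collapse: sample one basis state with probability |amplitude|^2."""
    probs = [abs(a) ** 2 for a in state]
    return random.choices(range(len(state)), weights=probs)[0]

state = uniform_superposition(20)
assert len(state) == 1_048_576    # 20 qubits span ~a million values
# total probability is 1, as it must be for a valid quantum state
assert abs(sum(abs(a) ** 2 for a in state) - 1) < 1e-9
```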
Each qubit can be entangled with the other qubits, and the input that a quantum computer receives is instantly spread among the entangled qubits. But this information cannot be accessed while it is held among the entangled particles. Once observed, qubits can no longer remain in a state of entanglement or of superposition, and this ruins the quantum computer’s ability to calculate. Charles Bennett described quantum information as being “like the information of a dream—we can’t show it to others, and when we try to describe it we change the memory of it.”
So, how do we extract the result of a calculation performed by a quantum computer?
Programming a quantum computer is very different from programming a traditional computer. It is important to note that a quantum computer is probabilistic rather than deterministic: it returns many very good answers simultaneously, not just a single optimal solution but also other alternatives to choose from. The ability to perform multiple (billions of) computations simultaneously using superposition is known as quantum parallelism. The crucial difference from today’s parallel computers is that a quantum computer does not need multiple processors operating concurrently; all of its calculations run on the same hardware.
Another way of explaining the inner workings of quantum computers was popularized by David Deutsch, one of the pioneers of quantum computing. His explanation is based on Hugh Everett’s many-worlds interpretation of quantum mechanics: a quantum computer’s calculations occur in alternate universes, where entangled particles act as agents of communication between the different universes, aiding in the sharing of information and the assembling of the results. Regarding Shor’s algorithm, Deutsch writes: “When we run such an algorithm, countless instances of us are also running it in other universes. The computer then differentiates some of those universes (by creating a superposition) and as a result they perform part of the computation on a huge variety of different inputs. Later, those values affect each other, and thereby all contribute to the final answer…”
But how does one access these independent results from the multiple universes?
This is where quantum interference plays a critical role. The various results from the alternate universes can constructively and destructively interfere with each other to produce a measurable solution. The job of a quantum algorithm is to consider all the possibilities simultaneously and combine the results into something that is both meaningful and measurable according to the laws of quantum mechanics.
The first quantum algorithm was invented by David Deutsch in 1985 and generalized with Richard Jozsa in 1992. Now known as the Deutsch-Jozsa algorithm, it decides whether a given Boolean function is constant or balanced in a single evaluation of the function, while a deterministic classical computer would need at least two evaluations. The Deutsch-Jozsa algorithm is of little practical use today, but “it is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm”.
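For the one-bit case (Deutsch’s original problem), the whole algorithm can be simulated in a few lines. The sketch below uses the simplified phase-oracle formulation, in which the single oracle query acts as |x⟩ → (−1)^f(x)|x⟩; interference after the second Hadamard makes the answer readable in one measurement:

```python
# Hadamard matrix: maps basis states to superpositions and back
H = [[2 ** -0.5, 2 ** -0.5],
     [2 ** -0.5, -2 ** -0.5]]

def apply(gate, state):
    """Matrix-vector product: apply a gate to a state vector."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

def deutsch(f):
    """Decide with ONE oracle call whether f: {0,1} -> {0,1} is
    constant or balanced (a classical computer needs two calls)."""
    state = apply(H, [1, 0])                   # |0> -> (|0>+|1>)/sqrt(2)
    state = [(-1) ** f(x) * a                  # the single oracle query
             for x, a in enumerate(state)]
    state = apply(H, state)                    # interfere the branches
    return "constant" if abs(state[0]) > 0.5 else "balanced"

assert deutsch(lambda x: 0) == "constant"
assert deutsch(lambda x: x) == "balanced"
```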
There was a time when interest in quantum computers was largely pedagogical. That changed in the 1990s, when research in quantum computing received a tremendous boost from Peter Shor’s invention of a quantum algorithm that solved an important problem: how to factor a large integer, a problem that is intractable on a classical computer. According to computer scientist Donald Knuth, factoring a 250-digit number would take over a million years on a network of a million of today’s digital computers working in tandem and using the most efficient factorization methods. Shor’s algorithm could do it much faster (in polynomial time) on a quantum computer, using a mathematical technique called the quantum Fourier transform, or QFT, to factor an integer N. This means that current cryptographic techniques will no longer be invincible once quantum computers become widely available.
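The quantum speedup in Shor’s algorithm lies entirely in one step, finding the order r of a number a modulo N (which the QFT does efficiently); the surrounding reduction from order finding to factoring is classical. The sketch below shows that reduction with the QFT replaced by brute-force order finding, so it enjoys no speedup and only illustrates the logic:

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r % N == 1. This is the step Shor's
    algorithm performs with the QFT; here it is brute-forced."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Shor's classical reduction: from the order r of a mod N,
    recover a nontrivial factor via gcd(a**(r//2) - 1, N)."""
    r = order(a, N)
    if r % 2:
        return None               # odd order: retry with another a
    f = gcd(a ** (r // 2) - 1, N)
    return f if 1 < f < N else None

# order of 7 mod 15 is 4; gcd(7**2 - 1, 15) = gcd(48, 15) = 3
assert shor_classical(15, 7) == 3
```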
Lov Grover devised an algorithm that uses quantum computers to search an unsorted database faster than a conventional computer. Being a quantum algorithm, Grover’s algorithm is probabilistic: it gives the correct answer with high probability, and by repeated execution that probability can be improved further. A linear search of an unsorted database requires O(N) time, while Grover’s algorithm takes O(√N) time, thus providing only a quadratic speedup. Other quantum algorithms can provide exponential speedups, but even a quadratic speedup is a considerable time saving when N is large.
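A toy simulation conveys the idea. The sketch below tracks the 2^n amplitudes classically and repeats Grover’s two steps, the oracle’s sign flip on the marked item and the “inversion about the mean”, about (π/4)√N times, after which the marked item dominates the probability distribution:

```python
import math

def grover(marked, n_qubits):
    """Simulate Grover's search for one marked item among N = 2**n.
    After ~ (pi/4) * sqrt(N) iterations the marked amplitude dominates."""
    N = 2 ** n_qubits
    amp = [1 / math.sqrt(N)] * N                 # uniform superposition
    for _ in range(round(math.pi / 4 * math.sqrt(N))):
        amp[marked] = -amp[marked]               # oracle: flip marked sign
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]        # inversion about the mean
    return max(range(N), key=lambda i: amp[i] ** 2)

# 256 items found in ~13 oracle queries instead of up to 256
assert grover(marked=5, n_qubits=8) == 5
```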
In the previous Sections we saw that when information is encoded in the quantum states of a system of n particles, vastly more computational states are available than in the classical case. We also saw how a quantum algorithm takes n “classical” bits as input and manipulates them so as to create a superposition of their possible states. It then manipulates this exponentially large superposition to obtain the final quantum result. Finally, the algorithm “measures” the result to obtain (with the appropriate probability distribution) the output bits.
We will now discuss the principles of a computational process, starting with the classical computational process known as the Church-Turing model of classical discrete computation. Any classical computation can always be decomposed into a sequence of physical devices known as logic gates that act on only a few bits at a time. A logic gate is the basic building block of digital circuits. It takes one or more Boolean values (i.e., FALSE or TRUE) as input and returns a single Boolean value as output. A key requirement of the computational process is the ability to use logical functions (logic gates) to perform arithmetic operations.
Classical Logic Gates
There are seven basic logic gates: NOT, AND, NAND, OR, NOR, XOR, and XNOR (see Table 1). The AND gate, for example, is so named because it acts in the same way as the logical “and” operator. Table 1 shows the circuit symbols and logic combinations for these gates. For the AND gate, the output is “true” when both inputs are “true.” Otherwise, the output is “false.” The relationships between the inputs and the output are captured in truth tables, where A and B represent the inputs and X the output.
Classical computation theory exhibits the remarkable property that it can be formulated in terms of a very small set of classical logic gates, called a universal set of gates. For example, one can simulate any arbitrary computational task using only the gates AND, OR, and NOT. These three gates are therefore universal for classical Boolean logic, and as a further simplification, all three can be built from a single gate, the NAND gate.
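This reduction is easy to verify directly; a few lines of Python rebuild AND, OR, and NOT from NAND alone and check them against their truth tables:

```python
def nand(a, b):
    """NAND: the output is 0 only when both inputs are 1."""
    return 1 - (a & b)

# The universal set {AND, OR, NOT} rebuilt from NAND alone:
def not_(a):     return nand(a, a)            # NAND with itself inverts
def and_(a, b):  return not_(nand(a, b))      # invert NAND to get AND
def or_(a, b):   return nand(not_(a), not_(b))  # De Morgan's law

# verify against the full truth tables
for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
```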
The logic operations described above have a very interesting and far-reaching consequence. A quick glance at their symbols shows that all the logic gates except the NOT gate erase a bit of information every time a logic operation is performed. Thus the NAND gate operation, for example, is irreversible because it accepts two inputs but produces one output, while the NOT gate is reversible, being its own inverse. Rolf Landauer showed that this loss of one bit of information results in a change of entropy that corresponds to the dissipation of a minuscule amount of energy, k_B T ln 2, where k_B is Boltzmann’s constant and T is the temperature. Bit erasure manifests as dissipation of heat, which points to physical irreversibility, meaning that the microscopic physical state of the system cannot be restored to its previous state after an irreversible logic operation is performed.
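Plugging in numbers shows just how minuscule this energy is; a quick calculation of the Landauer limit at room temperature:

```python
from math import log

k_B = 1.380649e-23       # Boltzmann's constant, J/K
T = 300                  # room temperature, K

# Landauer limit: minimum heat dissipated per erased bit
E = k_B * T * log(2)
# ~2.87e-21 joules per bit: tiny, but it adds up across the
# billions of gates switching billions of times per second in a chip
assert 2.8e-21 < E < 2.9e-21
```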
Computer chip makers have ever since been packing more and more logic gates into smaller and smaller volumes. Unfortunately, this produces more heat, which will ultimately limit the feasible packing density and performance unless the energy dissipated by each logic operation can be dramatically reduced. In 1973 Charles Bennett, at IBM Research, showed that a reversible model of the Turing machine could perform classical computations without dissipating energy in each logic operation, being both logically and thermodynamically reversible. He showed that any problem that can be simulated on the original irreversible machine can also be simulated on the reversible model without loss of efficiency. But the requirements of reversibility place tight constraints on the physical systems involved, and Bennett’s work initiated a search for physical models suitable for reversible classical computation. So far, nobody has succeeded in building a truly reversible computer, and according to Wikipedia “this field of research awaits the detailed development of a high-quality, cost-effective, nearly reversible logic device technology, one that includes highly energy-efficient clocking and synchronization mechanisms.”
It turns out that the two-bit logic gates shown in Table 1 are not sufficient for universal reversible computing. In 1981, Toffoli identified a genuinely universal three-bit gate for reversible Boolean logic: it applies NOT to the third bit if the first two bits are both 1, and otherwise has no effect. The Toffoli gate has important applications in quantum computing and will be discussed in the next Section.
Quantum Logic Gates
This section may be too technical for some readers, but I believe that a little mathematics will illuminate the basic concepts more than any number of words. So prepare yourself for a little rendezvous with equations, and don’t be disappointed if you cannot catch the basic idea immediately, because things will become clear later on. I have included the subtler mathematical portions in the Appendixes for the more mathematically inclined readers.
A quantum gate, or quantum logic gate, is the quantum analogue of a classical logic gate, and a sequence of quantum gates is a quantum gate array, or quantum circuit. An important point to note is that, unlike many classical logic gates, quantum logic gates are reversible. Some universal classical logic gates, such as the Toffoli gate, are reversible and can be directly mapped onto quantum logic gates. Quantum logic gates are represented by unitary matrices.
And just as any classical computation can be decomposed into a sequence of classical logic gates that act on only a few classical bits at a time, any quantum computation can be decomposed into a sequence of quantum logic gates that act on only a few qubits at a time. Whereas classical logic gates manipulate the classical bit values 0 and 1, quantum gates can manipulate quantum states, including superpositions of the computational basis states, which are frequently also entangled. Thus the logic gates of quantum computation are considerably more diverse than the logic gates of classical computation. Two of the most important quantum gates are the Hadamard and the Toffoli gates.
Hadamard gate. This gate operates on a single qubit. It is represented by the Hadamard matrix:

H = (1/√2) [ 1   1 ]
           [ 1  −1 ]

H is indeed a unitary matrix, as its rows are orthonormal. It acts so as to map computational basis states into superposition states and vice versa:

H|0⟩ = (|0⟩ + |1⟩)/√2,   H|1⟩ = (|0⟩ − |1⟩)/√2
The Hadamard is a deceptively simple-looking gate, but it harbors a remarkable property that turns out to be of vital importance in quantum computing. Suppose we prepare n qubits, each in the state |0⟩. Then, if each qubit, in parallel, is put through its own Hadamard gate, the state produced is an equal superposition of all the integers in the range 0 to 2^n − 1:

H^⊗n |00…0⟩ = (1/√(2^n)) Σ_{x=0}^{2^n−1} |x⟩

where |x⟩ is the “computational basis state” indexed by the binary string that corresponds to the number x in base-10 notation.
By applying n Hadamard gates independently to n qubits, all prepared initially in the state |0⟩, we can create an n-qubit superposition whose 2^n component eigenstates are the binary representations of all the integers in the range 0 to 2^n − 1. Thus, a superposition containing exponentially many terms can be prepared using only a polynomial (indeed, linear) number of operations. This superposition trick is used in many quantum algorithms to load a quantum memory register efficiently with an equally weighted superposition of all the numbers it can contain.
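The parallel-Hadamard trick described above can be sketched classically. The helper below applies a Hadamard gate to each qubit of a simulated n-qubit register, starting from |00…0⟩, and indeed ends up with 2^n equal amplitudes using only n passes over the state:

```python
def hadamard_all(n):
    """Apply H to each of n qubits starting from |00...0>: the result
    is an equal superposition of |0>, |1>, ..., |2**n - 1>."""
    state = [1.0] + [0.0] * (2 ** n - 1)       # the state |00...0>
    s = 2 ** -0.5
    for q in range(n):                          # Hadamard on qubit q
        step = 2 ** q
        for i in range(2 ** n):
            if (i & step) == 0:                 # pair basis states that
                a, b = state[i], state[i + step]   # differ only in bit q
                state[i] = s * (a + b)
                state[i + step] = s * (a - b)
    return state

state = hadamard_all(3)
# 3 Hadamards produce 2**3 = 8 equal amplitudes of 1/sqrt(8)
assert all(abs(a - 8 ** -0.5) < 1e-12 for a in state)
```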
Controlled-NOT gate. The controlled-NOT gate, CNOT, is a two-qubit gate. It flips the second (target) qubit if the first (control) qubit is 1 and leaves it unchanged otherwise.
The CNOT gate has the ability to change the entanglement between two qubits. For example, it transforms an unentangled two-qubit state into an entangled state. It can also perform the inverse operation, taking an entangled state to an unentangled one.
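As a sketch, here is CNOT acting on the four amplitudes of a two-qubit state, ordered [|00⟩, |01⟩, |10⟩, |11⟩]. Starting from a superposition on the control qubit, one CNOT produces the entangled Bell state (|00⟩ + |11⟩)/√2:

```python
def cnot(state):
    """Two-qubit CNOT on amplitudes [|00>, |01>, |10>, |11>]:
    flip the target (second) qubit when the control (first) is 1,
    i.e. swap the |10> and |11> amplitudes."""
    return [state[0], state[1], state[3], state[2]]

s = 2 ** -0.5
# H on the first qubit of |00> gives (|00> + |10>)/sqrt(2), unentangled...
state = [s, 0.0, s, 0.0]
# ...and CNOT turns it into the entangled Bell state (|00> + |11>)/sqrt(2)
bell = cnot(state)
assert bell == [s, 0.0, 0.0, s]
```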
Toffoli gate. The quantum analogue of the Toffoli gate acts on three qubits (Table 2, left), with two controls determining the result on one target. The Toffoli gate can be described as a “controlled-controlled-NOT”: if both controls are “true”, the target is flipped, “false”→“true” or “true”→“false”; otherwise the target is unchanged. In other words, the third qubit is flipped if and only if the first two qubits are both 1. This reversible gate can be implemented in quantum logic using a combination of two-qubit controlled-NOT gates and single-qubit gates, and it can be used to construct any arithmetical operation.
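A one-line model reproduces the Toffoli truth table and makes its reversibility explicit: applying the gate twice returns every input to itself.

```python
from itertools import product

def toffoli(a, b, c):
    """Controlled-controlled-NOT: flip target c iff both controls are 1."""
    return a, b, c ^ (a & b)

for a, b, c in product((0, 1), repeat=3):
    expected = (a, b, 1 - c) if (a, b) == (1, 1) else (a, b, c)
    assert toffoli(a, b, c) == expected
    # reversible: the Toffoli gate is its own inverse
    assert toffoli(*toffoli(a, b, c)) == (a, b, c)
```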
Fredkin gate. The quantum Fredkin gate can be used to perform a direct comparison of two sets of qubits to determine whether they are the same or not. This is useful not only in computing but is also an essential feature of some secure quantum communication protocols, where the goal is to verify that two strings, or digital signatures, are the same.
Table 2: The Toffoli (left) and Fredkin (right) gates
In 1989, David Deutsch generalized the three-bit Toffoli gate and in doing so was the first to identify a three-qubit gate, called the Deutsch gate, that is universal for quantum logic. In 2002 an important result was obtained by Shi (Shi, 2002) and further investigated by Aharonov: the three-qubit Toffoli gate and the one-qubit Hadamard gate together constitute a universal set of quantum gates. Unlike its classical reversible counterpart, the quantum Toffoli gate alone is not sufficient to reproduce the behavior of all quantum gates (meaning that the quantum Toffoli gate alone is not universal). A gate exhibiting a “genuine” quantum character needed to be added, and that gate is the Hadamard gate. These two gates are thus all that is ever needed for building a quantum computer. Furthermore, a theorem called the Solovay-Kitaev theorem asserts that any universal set of gates can simulate any other universal set efficiently. This is exactly analogous to how, in the classical world, one could build digital circuits from the AND, OR, and NOT gates, or from the AND and NOT gates, or even from the NAND gate alone. So it really does not matter which universal gate set is chosen!
But all of this comes at a price. Superposition is fragile. A random interaction of a single qubit with the surrounding molecules of the environment can cause the entire network of entangled qubits to decohere or collapse. The ongoing calculation is ruined as each qubit transforms into a digitized classical bit holding a single value: 0 or 1. Even looking at a qubit can cause it to decohere, making the process of obtaining a solution from a quantum computer just as difficult as performing the calculation itself. Thus quantum information must be shielded from all external noise of the surrounding environment.
In classical computers, the inevitable loss of information (not the thermodynamic bit loss of irreversible computing, but loss due to interactions with the environment) is reduced by designing redundancy into the system. Error-correcting algorithms compare multiple copies of the output and select the most frequent answer, discarding the rest of the data as noise.
The situation is much more complex with quantum computers. Trying to directly compare qubits during an intermediate stage of a calculation to check whether an error has occurred may ruin the calculation by collapsing the wave function and destroying the quantum superposition. It is also important to realize that errors in quantum computations are not just the discrete bit flips (from 0 to 1 and vice versa) of classical computers; they may be continuous phase errors of the form |1⟩ → e^(iθ)|1⟩, for any angle θ. Thus errors in quantum computations may seem almost impossible to rectify. Fortunately, the situation is not so bleak after all. According to a threshold theorem (John Preskill), quantum decoherence can be corrected as long as it is sufficiently weak. But how weak? According to some error-rate estimates, a quantum computer must be able to perform on the order of 10,000 perfect operations before an error sets in, a very severe constraint indeed. And the trick is to design and implement an algorithm that measures only the errors and not the data or information, thus preserving the superposition that contains the correct answer.
In 1995 Peter Shor devised a quantum error-correction scheme in which phase and bit-flip errors can be detected and corrected by distributing a logical state among many physical qubits by means of quantum entanglement. The basic idea in this scheme is to use two additional ancillary qubits to encode the state of a qubit, α|0⟩ + β|1⟩, as α|000⟩ + β|111⟩; i.e., the quantum information of one qubit is encoded in an entangled state of three qubits. An error inflicted upon one of the qubits can then be detected by applying the inverse encoding procedure and measuring the state of the ancillary qubits.
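The classical shadow of this three-qubit scheme (bit values in place of amplitudes) already shows the key trick: the syndrome, the two pairwise parities, pinpoints a single flipped qubit without ever reading the encoded value itself. A sketch:

```python
def encode(bit):
    """Three-qubit repetition encoding of a basis state; for a real
    qubit a|0>+b|1> the analogous circuit yields a|000>+b|111>."""
    return [bit, bit, bit]

def correct(block):
    """Classical analogue of syndrome measurement: the two pairwise
    parities locate any single bit-flip error; note that neither
    parity reveals the encoded value itself."""
    s1 = block[0] ^ block[1]          # parity of qubits 1 and 2
    s2 = block[1] ^ block[2]          # parity of qubits 2 and 3
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
    if flip is not None:
        block[flip] ^= 1              # undo the located error
    return block

for bit in (0, 1):
    for err in range(3):              # any single bit-flip error...
        block = encode(bit)
        block[err] ^= 1
        assert correct(block) == [bit] * 3   # ...is found and undone
```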
How to keep systems of entangled qubits from decohering is an active area of research. The challenge is to reduce the error rate to around one error in every 10,000 steps, because then error-correction procedures can successfully compensate for the decay of individual qubits. Impressive theoretical advances in quantum error correction and resilient quantum computation have been made in recent years, although the main obstacle remains the problem of interfacing with the system (for input and output) while also protecting the quantum state from environmental decoherence.
Building a Quantum Computer
Building a scalable universal quantum computer is no mean feat. Most schemes focus on finding ways to minimize the interactions of the qubits with the environment. In fact, a physical system that is capable of implementing a quantum computer has to meet several stringent requirements, a list of which is given by DiVincenzo:
- A scalable physical system with well characterized qubits
- The ability to initialize the state of the qubits to a simple well-defined state
- Long relevant decoherence times, much longer than the gate operation time
- A universal set of quantum gates
- A qubit specific measurement capability
Various physical systems have been proposed for implementing a quantum computer. The common goal is to construct a functional machine with a large number of qubits isolated well enough to have a low error rate. In the following paragraphs, I summarize the current proposals for building a universal quantum computer.
- Optical photon quantum computer. Photons seem a natural choice for physical qubits because they can store quantum information in several ways, including polarization. Furthermore, they can retain this information despite travelling hundreds of kilometers through air or optical fiber. Pairs of entangled photons, vital inputs to a quantum computer, can be created in a relatively straightforward manner. On the flip side, it is not easy to get these photons to interact with each other within the confines of the quantum computer, something that most quantum-computing processes require. One option is to exploit a scheme called “linear optical quantum computing” (LOQC) that uses entangled photons as input to a quantum computer. Instead of having these photons interact while in the computer, the results of the desired calculation are obtained from the output photons by making specific measurements on some of them. LOQC seemed an attractively simple scheme because information is borne entirely by the photons and processed by components such as beam splitters, phase shifters, and detectors. However, this very simplicity leads to limitations, such as the lack of deterministic entangling operations, which must be compensated for with substantial hardware overheads. The process is also non-deterministic, so not every attempt at performing the computation will succeed, which demands even more resources. Recently a group of researchers (Li, Humphreys, Mendoza, & Benjamin, 2015) estimated the resource costs of implementing fault-tolerant LOQC. They estimate that 100,000 detectors would be required for each physical qubit in their hypothetical quantum computer, along with comparable numbers of mirrors and beam splitters. Furthermore, each logical qubit in the computer would comprise 1,000 physical qubits to ensure fault tolerance.
This means that a whopping 100 billion detectors would be needed to build a practical quantum computer comprising 1,000 logical qubits. In summary, the prospect of fabricating affordable light-based quantum computers is not very attractive, as they come at a great cost.
- Optical cavity quantum electrodynamics. Cavity quantum electrodynamics (CQED) experiments implement a situation so simple that their results can be cast in terms of the fundamental postulates of quantum theory. They are thus appropriate for tests of basic quantum properties: superposition, complementarity, and entanglement. In the context of quantum information processing, the atom and the cavity are long-lived qubits, and their mutual interaction provides a controllable entanglement mechanism, an essential requirement for quantum computing and teleportation applications.
- Ion traps. An ion trap is a system of electric and magnetic fields that can capture ions and hold them at fixed locations. Several ions can be arranged in a line at regular intervals in such a trap. Rainer Blatt’s group at the University of Innsbruck, Austria, has successfully achieved this for up to fourteen ions, and the researchers hope to scale the technology to larger numbers of trapped ions.
- Electric current in a superconductor. John Martinis and his group at the University of California, Santa Barbara, have used superconductor technology to perform quantum operations on one or two qubits with very high fidelities, from 99.4% to 99.92%.
- Nuclear magnetic resonance (NMR).
- Josephson junctions. Josephson junctions are good candidates for the construction of qubits for a quantum computer. This system is attractive because the low dissipation inherent to superconductors makes possible, in principle, long coherence times. In addition, because complex superconducting circuits can be microfabricated using integrated-circuit processing techniques, scaling to a large number of qubits should be relatively straightforward.
- Quantum Dots. Qubits can also be constructed from electrons. An electron locked in a tiny semiconductor volume produces what is known as a quantum dot. Its spin turns the electron into a small permanent magnet, and researchers can manipulate the spin using an external magnetic field; the direction of the spin is then used to encode information. Quantum dots of outstanding quality can also be constructed from electron holes, the positively charged vacancies generated by removing specific electrons from the electronic structure. Electron holes have intrinsic spins of their own, which researchers can likewise manipulate using external magnetic fields to encode information. Quantum dots created from electrons have the disadvantage that the nuclear spins of the surrounding atoms also generate magnetic fields, which can distort the external magnetic field in a random and unpredictable manner, interfering with the programming and reading of qubits. A much cleaner approach is to use holes: because they are positively charged, they are essentially decoupled from the positively charged nuclei of the surrounding atoms, and hole-based qubits are therefore virtually immune to interference from nuclear spins.
- Silicon-based nuclear spin quantum computer.
Quantum Computing with Fractional Quantum Hall Effect
In the previous Section I summarized some of the proposed methods for building a quantum computer. Note that most share the common challenge of shielding the quantum information from external noise in the surrounding environment, because conventional qubits are extremely fragile, being sensitive to heat and other forms of environmental disturbance. The slightest external interference will cause the entire system to decohere, so much so that the ever-so-vital superposition that allows the qubit to be both 0 and 1 at the same time is destroyed. This means errors will creep into the calculations unless the data is refreshed repeatedly before quantum decoherence sets in.
To the early scholars of quantum computing, the data corruption issue was an almost insurmountable obstacle. In the early 1990s, following an initial period of optimism, the hope of actually fabricating a quantum computer faded until Peter Shor published his error correction algorithm. From a theoretical standpoint, the issue of quantum decoherence appeared to be solved. It seemed that one could, after all, arrive at the correct answer with a margin of error as small as desired, provided the system could hold multiple copies of the information. But it was soon realized that Shor's error correction algorithm did not function unless the error rate was less than one in 10,000 steps. As a direct consequence, the maximum number of qubits a prototype can muster today is around 7, while the expected number needed to beat a conventional supercomputer is around 32.
In 1997, Alexei Kitaev of the Landau Institute outside Moscow (now at Microsoft) published a revolutionary proposal that rekindled the hope for fault-tolerant quantum computation. Kitaev’s proposal was based on topology, using qubits distributed on the surface of a toroid. Physical systems possessing topological degrees of freedom are insensitive to local perturbations. Kitaev realized such systems would therefore automatically be protected against errors caused by local interactions with the environment.
But first, what is topology?
Topology is "the branch of mathematics concerned with those properties of geometric configurations that are unaltered by elastic deformation" (Das Sarma, Freedman, and Nayak, 2006). It describes the properties that remain intact when an object is stretched, twisted, or deformed, but not torn apart. Topology is mathematically distinct from geometry. A ball and a cube have very different shapes geometrically, but from a topological standpoint they are the same, because one can be converted into the other without having to make a hole or a cut. On the other hand, a bagel with a hole in the middle and a coffee cup with a hole in the handle belong to another topological category, because each can likewise be remolded into the other's shape. Topological objects can thus contain one hole, or two, or three, or four, and so on, but this number has to be an integer. Similarly, a loop of string with a knot is topologically different from a loop without a knot. The only way to change the closed loop into a closed loop plus knot is to cut the string, tie the knot, and then reseal the ends of the string together. Small perturbations do not change a topological property of an object; the only way to destroy it is by making a drastic change such as tearing or gluing different parts of it.
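To make the idea of a topological invariant concrete, here is a small illustrative Python sketch (the loop data and perturbation size are my own, hypothetical choices): the winding number of a closed loop around a point is an integer, and it survives small elastic deformations of the loop, just as a bagel's hole survives stretching.

```python
import math
import random

def winding_number(points):
    """Total signed angle swept by a closed polygonal loop
    around the origin, in units of full turns (an integer)."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        d = math.atan2(y2, x2) - math.atan2(y1, x1)
        # wrap into (-pi, pi] so each step is the small signed angle
        while d > math.pi:
            d -= 2 * math.pi
        while d <= -math.pi:
            d += 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

# A loop of radius 2 winding twice around the origin
loop = [(2 * math.cos(t), 2 * math.sin(t))
        for t in (4 * math.pi * k / 200 for k in range(200))]
print(winding_number(loop))          # 2

# Small "elastic" perturbations leave the invariant unchanged
random.seed(0)
wobbly = [(x + random.uniform(-0.3, 0.3), y + random.uniform(-0.3, 0.3))
          for x, y in loop]
print(winding_number(wobbly))        # still 2
```

Only a drastic change, such as dragging the loop across the origin (the analog of tearing), can alter the integer.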
It was found that in certain materials with topological order (or symmetry), excitations manifest as topological defects distributed over many particles. These defects can be created, moved, and manipulated for effective implementation in quantum computers. Since topological properties are unchanged by small perturbations, such systems have a built-in resistance to errors caused by stray interactions with the surrounding environment, and they do not seem to succumb to decoherence easily.
Topological defects or excitations have a special name: anyons, coined by Frank Wilczek of the Massachusetts Institute of Technology. Anyons are strange, theoretically proposed particle-like structures that are actually excitations in a two-dimensional electron gas. Experiments have indicated that the two-dimensional electron gas formed in the thin layer between two slabs of semiconductor, cooled to near absolute zero and subjected to strong magnetic fields, attains a profoundly entangled ground state that is well separated from all excited states. Interestingly, low-energy particle excitations in such a system do not possess the quantum numbers that define electrons; rather, they are anyons, and they carry electric charges that are fractions of the electron charge.
Anyons are quasiparticles that are similar to the particles and antiparticles of high-energy physics, although there are some important differences. To understand the differences, let us consider the principle of indistinguishability in quantum physics. Elementary particles such as electrons are all identical. Hence an exchange operation of two electrons (swapping their positions) in a many-electron system is a symmetry that leaves the underlying physics unchanged.
Indistinguishable particles are either fermions (example: electrons in a metal) or bosons (example: 4He atoms in a superfluid) depending on whether they possess half-integer or integer spins. But there is another important distinction. If the particles are bosons, then an exchange of two particles is represented by the identity operator and the wave function is invariant; we say the particles obey Bose statistics. If the particles are fermions, then an exchange is represented by multiplication by −1; the wave function changes sign, and we say that the particles obey Fermi statistics. Actually, when two particles are interchanged along some specified trajectories in three dimensions, the overall quantum state is multiplied by a phase factor e^{iφ}. Two swaps are equivalent to the identity transformation, hence e^{2iφ} = 1, which forces φ = 0 (bosons) or φ = π (fermions). On the other hand, in two dimensions the double swap corresponds to one particle making a full turn around the other, and this process is topologically nontrivial. Therefore, the exchange phase φ can, in principle, have any value, hence the name "anyon". In short, indistinguishable particles in two dimensions are neither bosons nor fermions; they are anyons.
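The exchange rules above can be sketched numerically. In this minimal toy model (a single complex amplitude standing in for the two-particle wave function, a simplification of my own), a swap multiplies the state by e^{iφ}:

```python
import numpy as np

def exchange(state, phi):
    """Swap two indistinguishable particles: multiply by e^{i*phi}."""
    return np.exp(1j * phi) * state

psi = np.array([1.0 + 0.0j])

# Bosons (phi = 0): a single swap leaves the state unchanged
assert np.allclose(exchange(psi, 0.0), psi)

# Fermions (phi = pi): a single swap flips the sign
assert np.allclose(exchange(psi, np.pi), -psi)

# In 3D a double swap must be the identity, forcing phi = 0 or pi.
# For an anyon with, say, phi = pi/4 the double swap is NOT the identity:
phi = np.pi / 4
double_swap = exchange(exchange(psi, phi), phi)
print(np.allclose(double_swap, psi))   # False: e^{2i*phi} != 1
```

The last line is exactly the two-dimensional loophole: because the double swap is topologically nontrivial, e^{2iφ} need not equal 1.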
Certain kinds of anyons, called non-abelian anyons, are excellent candidates for topological qubits capable of storing and manipulating information. These are remarkably stable, and minor environmental prods do not cause a topological qubit to decohere.
How are these anyons useful for computation?
According to Feynman's path integral formalism, the quantum mechanical amplitude for n particles at positions x1, …, xn at an initial time, evolving to a configuration x′1, …, x′n at a final time, is given by a sum over all trajectories, and each trajectory or world line is summed with weight e^{iS/ħ}, where S is the trajectory's classical action. Each anyon's world line forms a thread, and the movements of the anyons as they are swapped produce braiding of the threads (anyons separated in space cannot interact with each other directly, but they can interact topologically if they move around each other). A clockwise swap creates one type of braid, while a counterclockwise swap creates a different kind. Distinct braids can encode different computational tasks, and those structures are topologically stable. One can track how many times the braiding happens and even manipulate the braids without destroying the superposition, making a topological computer inherently more robust.
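As a rough illustration of braids as computational records, a braid can be represented as a word of signed generators, where a clockwise swap and the matching counterclockwise swap cancel. This free reduction is only a simplification of my own (it ignores the full braid-group relations, such as σ1σ2σ1 = σ2σ1σ2), and all names here are hypothetical:

```python
def reduce_braid(word):
    """Cancel adjacent inverse pairs in a braid word.
    Generator +k is a clockwise swap of strands k and k+1;
    -k is the counterclockwise swap (its inverse)."""
    stack = []
    for g in word:
        if stack and stack[-1] == -g:
            stack.pop()          # sigma_k * sigma_k^{-1} = identity
        else:
            stack.append(g)
    return stack

# A clockwise swap followed by its inverse is topologically trivial
print(reduce_braid([1, -1]))       # []

# Swaps that do not cancel leave a genuine braid, i.e. an operation
print(reduce_braid([1, 2, 1]))     # [1, 2, 1]
```

The point of the topological scheme is that only the braid class matters, so small jiggling of the world lines (noise) changes nothing.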
But are anyons real or just theoretical entities?
Recent experiments in a field known as fractional quantum Hall physics have indicated that anyons are realistic structures, not just theoretical entities, and further experiments are underway to study the feasibility of a topological quantum computer made of anyonic qubits. In the quantum Hall effect, anyons move relatively freely in the layer between the semiconductors and form something called a topological quantum fluid, which displays surprising characteristics. The collective motion of the anyons is described by the conductance and, because of topology, it varies in steps: it is quantized. The mysterious phenomenon exhibited by the quantum Hall effect was first described theoretically using topology by David Thouless of the University of Washington, Seattle, who shared the 2016 Nobel Prize in Physics with F. Duncan M. Haldane of Princeton University and J. Michael Kosterlitz of Brown University, Providence. The trio used advanced mathematical methods to explain "strange phenomena in unusual phases (or states) of matter, such as superconductors, superfluids or thin magnetic films." Kosterlitz and Thouless studied phenomena that arise in a two-dimensional flat world, for example on surfaces or inside extremely thin layers. Haldane studied matter that forms threads so thin they can be considered one-dimensional.
Theoretical studies indicated that certain fractional quantum Hall quasiparticles are indeed non-abelian and that "the most likely candidate for finding a topological state with non-abelian anyonic excitation is the fractional quantum Hall state at the plateau observed at ν = 5/2."
In 2005, a team of researchers at Florida State University and Lucent Technologies' Bell Laboratories, led by Nicholas E. Bonesteel, demonstrated how to construct a controlled NOT (or CNOT) gate to an accuracy of two parts in 10^3 by braiding six anyons. A CNOT gate takes two inputs: a control bit and a target bit. If the control bit is 1, it changes the target bit from 0 to 1, or vice versa. Otherwise the bits are unaltered.
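The CNOT behavior just described can be checked directly against its standard 4×4 matrix in the computational basis; the helper names below are my own:

```python
import numpy as np

# CNOT in the basis |control, target>: flips the target iff control = 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

basis = {"00": 0, "01": 1, "10": 2, "11": 3}

def apply_cnot(label):
    """Apply CNOT to a basis state given as a two-bit label."""
    state = np.zeros(4)
    state[basis[label]] = 1
    out = CNOT @ state
    return [k for k, v in basis.items() if out[v] == 1][0]

for label in ("00", "01", "10", "11"):
    print(label, "->", apply_cnot(label))
# 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10
```

In the Bonesteel scheme, this same unitary is approximated not by a circuit element but by a particular braid of six anyons.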
Being able to construct a topological logic gate is a very significant result. We have seen earlier how any computation can be built from a network of CNOT gates. In a recent exciting development, Sankar Das Sarma of the University of Maryland, Michael Freedman of Microsoft Corp, and Chetan Nayak of UCLA outlined how one could construct a topological logic gate from gallium arsenide for which the error rate would be extraordinarily small. Such a tiny error rate occurs because the probability of errors is exponentially suppressed as the temperature is lowered and the length scale increased. That exponential factor is the essential contribution of topology, and it has no analog in the more traditional approaches to quantum computing.
In Section 19.3.7 we saw how scientists suggested ingenious methods for quantum error correction by exploiting the phenomenon of entanglement, which enabled them to check the data without making any actual measurement, thereby preserving the superposition. The particularly desirable quality of topological quantum computing is that error correction is almost unnecessary, since topological qubits have built-in protection from outside interference. Their great advantage is aptly summarized in Graham Collins's Scientific American article (Collins, 2006): "The promise of extraordinarily low error rates—many orders of magnitude lower than those achieved by any other quantum computation scheme to date—is what makes topological quantum computing so attractive. Also, the technologies involved in making fractional quantum Hall devices are mature, being precisely those of the microchip industry; the only catch is that the devices have to operate at extremely low temperatures—on the order of milli-kelvins—for the magical quasiparticles to be stable."
“If non-abelian anyons actually exist, topological quantum computers could well leapfrog conventional quantum computer designs in the race to scale up from individual qubits and logic gates to fully fledged machines more deserving of the name “computer.” Carrying out calculations with quantum knots and braids, a scheme that began as an esoteric alternative, could become the standard way to implement practical, error free quantum computation.”
Surface Code Quantum Computing
Surface code computing is another method of great potential for fabricating fault-tolerant computers. The idea evolved from Alexei Kitaev's efforts to develop simple models for topological order, using qubits distributed on the surface of a toroid. Bravyi and Kitaev (Bravyi & Kitaev, 1998) as well as Freedman and Meyer (Freedman & Meyer, 2005) subsequently showed that the toroidal geometry is unnecessary, and that planar versions (thus "surface codes") consisting of a 2-D grid of qubits could be developed that are capable of performing calculations. The qubits would be arranged in a checkerboard pattern where "white squares" would hold data qubits for performing operations and "black squares" would contain measurement qubits that detect and correct errors in the neighboring data qubits. Thus, in this setup, the surface code can indirectly measure possible errors in the data qubits without disturbing their delicate quantum states.
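The checkerboard layout just described can be sketched as a toy grid. This is a schematic only, not a working error-correction code, and all names are my own:

```python
# 'D' marks a data qubit, 'M' a measurement qubit that repeatedly
# checks its neighboring data qubits for errors.
def checkerboard(rows, cols):
    return [['D' if (r + c) % 2 == 0 else 'M' for c in range(cols)]
            for r in range(rows)]

def neighbors(grid, r, c):
    """Data qubits adjacent to the measurement qubit at (r, c)."""
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
            out.append((rr, cc))
    return out

grid = checkerboard(5, 5)
for row in grid:
    print(' '.join(row))

# An interior measurement qubit touches four data qubits:
print(len(neighbors(grid, 2, 1)))   # 4
```

The key property the sketch makes visible is locality: each measurement qubit interacts only with its nearest neighbors, which is precisely what makes the scheme attractive for superconducting hardware.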
A significant advantage of surface codes is their relative tolerance to local errors. Preskill and co-workers (Wang, Harrington, & Preskill, 2003) were the first to show how the critical logical CNOT operation could be implemented using stacked layers of surfaces. Although such a three-dimensional structure could potentially complicate future physical implementations, the ability of the surface codes to withstand large error rates is a very attractive feature—almost 3% per surface code clock cycle, assuming the ability to measure a four-qubit operator.
John Martinis and co-workers (Barends , et al., 2014) report the construction of such a surface code system with five qubits in a row made from superconducting devices. “Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication.” The system performs with fidelity that is at the threshold for quantum error correction, suggesting that error-free quantum computing should be possible. The platform is expected to scale up to larger numbers of qubits and two-dimensional architecture. The abstract of their paper is particularly illuminating: “For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbor coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbor coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.”
Recently, remarkable success has been reported with a particular "flavor" of quantum computer, called a quantum annealer. Developed by a Canadian company called D-Wave Systems, a quantum annealer is not a universal quantum computer; instead, it implements "adiabatic" quantum computation to solve a specific kind of problem, called NP-hard optimization, which involves finding the global minimum value of a highly complex function.
An example of NP-hard optimization is the "maximum clique problem," which deals with finding the largest cluster of people who all know each other in a city. One approach would be to identify every group of mutual friends and compare their sizes until the largest group is found. Such a brute-force approach would only work for a small village; for a large city like Tokyo, it becomes untenable. Another example is the traveling salesman problem, which involves identifying the most efficient route a traveling salesman can take to visit every city on his list and eventually return to the starting point, his home. Many such optimization problems exist in computer science, particularly in the field of machine learning.
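A brute-force solution to the maximum clique problem can be sketched as follows. The friendship graph is invented for illustration, and the exponential number of subsets examined is exactly what makes this approach untenable for a large city:

```python
from itertools import combinations

def is_clique(adj, group):
    """True if every pair in the group are mutual friends."""
    return all(b in adj[a] for a, b in combinations(group, 2))

def max_clique(adj):
    """Brute force: try every subset of people, largest first.
    The number of subsets grows as 2^n, hence village-sized only."""
    people = list(adj)
    for size in range(len(people), 0, -1):
        for group in combinations(people, size):
            if is_clique(adj, group):
                return set(group)
    return set()

# A tiny friendship graph (symmetric adjacency sets)
adj = {
    'Ann': {'Bob', 'Cal', 'Dee'},
    'Bob': {'Ann', 'Cal', 'Dee'},
    'Cal': {'Ann', 'Bob', 'Dee'},
    'Dee': {'Ann', 'Bob', 'Cal'},
    'Eve': set(),
}
print(max_clique(adj))   # the four mutual friends, excluding Eve
```

For n people there are 2^n subsets to examine in the worst case, which is why heuristic and, potentially, quantum approaches are of interest.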
In 2015, Google researchers announced that a quantum annealer called the D-Wave 2X machine had successfully executed a task 100 million times faster than a classical computer—a feat that appeared to show that the machine completed in one second a task that potentially could take a classical computer three years to complete. Annealing relies on quantum tunneling and lets an initially simple system evolve “very slowly” towards the desired result.
The D-Wave 2X processor consists of approximately 1,000 qubits made of tiny superconducting metal loops. Due to superposition, each superconducting loop produces a magnetic field that points both up and down at the same time. As the strength of a sideways magnetic field is decreased, the qubits tend to collapse from their disordered higher-energy states into the lowest-energy states. Because of the probabilistic nature of quantum computers, this process must be repeated several times to ensure that the D-Wave system succeeds in tunneling from solution to solution until it reaches the lowest-energy state, which corresponds to finding the global minimum of the function (the problem's solution).
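The repeat-and-keep-the-best strategy can be illustrated with classical simulated annealing, which hops over energy barriers thermally where a quantum annealer would tunnel through them. The objective function and cooling schedule below are toy choices of my own, not D-Wave's:

```python
import math
import random

def f(x):
    """A bumpy 1-D energy landscape with many local minima."""
    return x**2 + 10 * math.sin(3 * x)

def anneal(seed, steps=20000, temp=5.0):
    """One annealing run: accept uphill moves with probability
    exp(-delta/t) while the temperature t is slowly lowered."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    for i in range(steps):
        t = temp * (1 - i / steps) + 1e-9
        candidate = x + rng.gauss(0, 0.5)
        delta = f(candidate) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
    return x

# Any single run may settle into a local minimum, so repeat with
# different seeds and keep the best (lowest-energy) answer found.
best = min((anneal(s) for s in range(10)), key=f)
print(round(best, 2), round(f(best), 2))
```

The repetition step mirrors the D-Wave workflow: each run ends in some low-energy state, and only by rerunning does one gain confidence that the global minimum has been reached.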
Whether the D-Wave machine indeed uses quantum effects to solve real-world problems has long been a contentious issue among scientists. Google's recent study seems to have settled this issue once and for all, although it is important to realize that annealers may return perfectly acceptable solutions that are not the best ones, because the system can always tunnel to a low-energy state that is not necessarily the lowest possible one.