Quantum Computing. Melanie Swan

problems that are known to be difficult. These include Shor’s factoring algorithm (which could break the current RSA cryptography standard) and Grover’s search algorithm (for faster search through large datasets). Nevertheless, in the shorter term, even with NISQ devices, it may be possible to make significant progress in the areas of optimization, simulation, machine learning, and cryptography. One application with significant economic promise for quantum computing is simulating physical processes in chemistry, physics, and biology to discover new materials and pharmaceutical substances.

      Although error correction is not a substantial issue for NISQ devices, it could start to become a problem in more sophisticated quantum computing systems, given the sensitivity of qubits to environmental noise. Predictions as to the number of qubits at which error correction will be required vary considerably. One estimate is presented in Table 4.1 (McMahon, 2018), as a strawman sketch of the different kinds of applications and the number of qubits each needs. Many factors could play a role in error correction, including the hardware approach, chip architecture, and quantum algorithm design. Because full quantum error-correction schemes are not available in the short term, clever workarounds have been developed. These include using NISQ devices for quantum simulation with error-resilient algorithms (Colless et al., 2018), hardware that is resistant to errors (Kandala et al., 2017), and other error mitigation techniques (Kandala et al., 2019).
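      To give a flavor of one error-mitigation idea (an illustrative sketch, not the specific method of any of the papers cited above), zero-noise extrapolation runs the same circuit at several deliberately amplified noise levels and extrapolates the measured expectation value back toward the zero-noise limit. The noise scales and measured values below are made-up numbers.

```python
import numpy as np

# Hypothetical expectation values of some observable, measured from the same
# circuit run at artificially amplified noise levels (made-up data).
noise_scales = np.array([1.0, 2.0, 3.0])        # 1.0 = the device's native noise
measured_values = np.array([0.78, 0.62, 0.49])  # the value degrades as noise grows

# Fit a low-order polynomial in the noise scale and evaluate it at zero noise.
coeffs = np.polyfit(noise_scales, measured_values, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"Raw (scale 1.0) value:    {measured_values[0]:.3f}")
print(f"Zero-noise extrapolation: {zero_noise_estimate:.3f}")
```

      The choice of extrapolation model (linear, polynomial, exponential) is itself an assumption and a source of error in practice.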

      One step in the quantum computing roadmap is demonstrating quantum advantage (a clear advantage of using quantum computing versus classical computing). Substantiating quantum advantage is a largely academic hurdle that is expected to be achievable with currently available NISQ devices of 30–70 qubits. After quantum advantage, the next class of quantum computing applications is optimization and simulation in domains ranging from chemistry, physics, and biology to economics and finance. Further applications involve quantum machine learning and quantum cryptography. Quantum machine learning refers both to applying machine learning techniques to the quantum domain and to using quantum mechanical systems to extend research in machine learning. Quantum cryptography contemplates an extensive suite of applications such as quantum key distribution, cryptographic algorithms, and quantum-secure zero-knowledge proofs.
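      As a concrete, if toy, illustration of quantum key distribution, the sketch below classically simulates only the sifting step of the BB84 protocol (the protocol name comes from the broader QKD literature, not from this passage): Alice encodes random bits in random bases, Bob measures in random bases, and both keep just the positions where their bases happen to agree.

```python
import secrets

n = 32  # number of raw qubits sent (illustrative size)

# Alice picks random bits and random encoding bases (0 = rectilinear, 1 = diagonal).
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]

# Bob picks random measurement bases independently.
bob_bases = [secrets.randbelow(2) for _ in range(n)]

# If the bases match, Bob recovers Alice's bit; otherwise his outcome is random.
bob_results = [
    a_bit if a_basis == b_basis else secrets.randbelow(2)
    for a_bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
]

# Sifting: keep only the positions where the bases agree (announced publicly).
sifted_key = [bit for bit, a, b in zip(bob_results, alice_bases, bob_bases) if a == b]

print(f"Raw transmissions: {n}, sifted key length: {len(sifted_key)}")
print("Sifted key:", "".join(map(str, sifted_key)))
```

      In a full protocol, a sample of the sifted key would then be compared publicly to estimate the error rate and detect eavesdropping before the key is used.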

Table 4.1. Estimated number of qubits required by application (McMahon, 2018).

Application                                  Estimated # qubits required
Quantum advantage                            50
Quantum optimization and sampling            ~100
Quantum simulation                           >100
Chemistry simulation/nitrogen fixation       Few hundred
Applications requiring error correction      >1 million
Shor’s algorithm (factoring)                 >50 million
Grover’s algorithm (search)                  >100 million

      The advent of quantum computing throws into sharper relief theories for analyzing which kinds of problems are computable. Computability is part of the theory of computation, which investigates how efficiently problems can be solved with different kinds of algorithms and models of computation. Quantum computing might extend the range of problems that can be computed efficiently, but it would not make every problem computable. Quantum computing could provide an incremental yet crucial extension to the kinds of real-world problems that might be solvable.

      In computational complexity, two parameters are studied: the computational resources, in terms of time and space, needed to solve a problem. The theoretical basis for computational complexity is the Church–Turing computability thesis (Table 4.2). The original Church–Turing thesis is concerned only with the theoretical question of whether a given problem is computable at all, irrespective of the time required. The extended Church–Turing thesis also incorporates the practical consideration of whether a problem is efficiently computable in time. A given problem might fall within any of various tiers of time complexity and space complexity in the hierarchy of computational complexity classes.

      As an extremely broad heuristic, quantum computers may allow a one-tier increase in computing according to the computational complexity schema. For example, a problem that requires exponential time in classical systems (i.e. time that is too long for practical results) may take polynomial time in quantum systems (i.e. a reasonable amount of time for practical use). In the canonical Traveling Salesman Problem, it may be possible to check twice as many cities in half the time using a quantum computer.
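      To make the scale of such speedups concrete, the short calculation below (an illustration, not a figure from the text) compares the roughly N queries that classical brute-force search needs over an unstructured list of N items with the roughly (π/4)√N iterations of Grover’s algorithm mentioned earlier.

```python
import math

# Compare classical brute-force query counts with Grover's ~ (pi/4) * sqrt(N)
# iterations for unstructured search over N items (illustrative figures only).
for exponent in (6, 9, 12):
    n_items = 10 ** exponent
    classical_queries = n_items                      # worst case: check every item
    grover_iterations = math.ceil(math.pi / 4 * math.sqrt(n_items))
    print(f"N = 10^{exponent}: classical ~{classical_queries:.1e} queries, "
          f"Grover ~{grover_iterations:.1e} iterations")
```

      The quadratic saving is dramatic at large N, although it is a smaller gain than the exponential-to-polynomial jump sketched above for other problem classes.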

Table 4.2. The Church–Turing thesis, original and extended.

Church–Turing thesis        Problem answered
Church–Turing (original)    Is the problem computable? (ignoring time)
Church–Turing (extended)    Is the problem efficiently computable in time?

      One of the biggest challenges to the potential instantiation of universal quantum computers is quantum error correction. Qubits are more fragile than classical bits and need to be error-corrected if they become damaged. It is too early in the development of quantum computing to estimate exactly which hardware approaches will require error correction and at what point (i.e. at what number of qubits). However, since qubits are affected by both state decay and environmental noise, some kind of error correction is likely to be required. Methods for error correction are largely a research effort at present, and more clarity may arrive with implementation.
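      A minimal sketch of the principle behind error correction is the three-qubit bit-flip repetition code, simulated classically below (only bit-flip errors are tracked, not full quantum states, so this is an analogy rather than a quantum simulation): a logical bit is encoded redundantly into three physical bits, noise is applied, and a majority vote recovers the logical value whenever at most one of the three bits has flipped.

```python
import random

def encode(logical_bit):
    """Encode one logical bit into three physical bits (bit-flip repetition code)."""
    return [logical_bit] * 3

def apply_bit_flip_noise(physical_bits, flip_probability=0.1):
    """Independently flip each physical bit with the given probability."""
    return [bit ^ 1 if random.random() < flip_probability else bit
            for bit in physical_bits]

def decode(physical_bits):
    """Recover the logical bit by majority vote (corrects any single bit flip)."""
    return 1 if sum(physical_bits) >= 2 else 0

# Illustrative run: decoding succeeds whenever at most one of the three bits flips.
random.seed(7)
trials = 10_000
failures = sum(
    decode(apply_bit_flip_noise(encode(0))) != 0
    for _ in range(trials)
)
print(f"Logical error rate with encoding: {failures / trials:.4f} "
      f"(vs ~0.1 for a single unprotected bit)")
```

      The quantum version measures parity-check (syndrome) operators rather than the data qubits themselves, so the encoded state is never read out directly.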

      The two kinds of commercially available systems (as of June 2019) both use superconducting circuits, but in different architectures: the standard gate model (with 30–70 qubits) and the quantum annealing model (with 2048 qubits). The trade-off is that the gate model aims to be more of a universal quantum computer, whereas the quantum annealing model can only accommodate certain classes of optimization problems. On the one hand, annealing machines harness the natural evolution of quantum systems over time to settle into the lowest-energy state (a minimum); the intermediate operation of the quantum process cannot be controlled. On the other hand, gate model machines seek to control and manipulate the evolution of the quantum system at each moment in time, which suggests that they can be used to solve a much larger and more general set of problems. For error correction, the benefit of less manipulation is that fewer errors arise, and the annealing machine does not require the same kind of error correction that the gate model machine does, at least at the current number of qubits.
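      As a hedged sketch of what settling into the lowest-energy state means on the annealing side (a made-up toy problem, not vendor code), an optimization task is first expressed as an energy function over binary variables, here in QUBO form, and the minimum-energy assignment is the answer. The brute-force enumeration below plays the role that physical relaxation plays in the annealer.

```python
from itertools import product

import numpy as np

# A small, made-up QUBO (quadratic unconstrained binary optimization) problem:
# energy(x) = x^T Q x for binary x. An annealer would physically relax toward
# the lowest-energy assignment; here we simply enumerate every assignment.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])

def energy(x):
    x = np.array(x, dtype=float)
    return float(x @ Q @ x)

best = min(product((0, 1), repeat=Q.shape[0]), key=energy)
print(f"Lowest-energy assignment: {best}, energy = {energy(best)}")
```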

[Table 4.3 (image)]

      Other approaches to quantum information systems claim that error correction is not an immediate concern (Table 4.3), although the path to scaling up the number of qubits is perhaps less clear than with superconducting circuits. These other methods, although still in the research phase, are designed such that error-correction requirements may be greatly reduced. These systems include ion trapping, fermion braiding, and photonic quantum computing. In particular, the fermion braiding approach is robust to noise because only changes in topology, not changes in geometry, have an effect, and errors can be corrected in hardware rather than software.

      Quantum information has different properties than classical information and is much more sensitive to being damaged by the computing environment. Whereas in a classical system, the idea is to pack in as many bits as possible, in a quantum system, the aim is to have only as many high-fidelity qubits as can be effectively controlled, and scale up with that level of integrity. It is difficult
