Quantum Computing. Melanie Swan
Zero-knowledge proofs are computational proofs, realized as a mechanistic set of algorithms, that can easily be incorporated as a feature in many technology systems to provide privacy and validation. A zero-knowledge proof reveals no information except the correctness of the statement being proved: data verification is separated from the data itself, conveying zero knowledge about the underlying data and thereby keeping it private. The proofs are used first and foremost to establish validity, for example, that someone is who they claim to be. Proofs are also an information compression technique: some amount of underlying work is performed, and the abstracted output is all that is needed as the outcome (the proof evaluates to a one-bit True/False answer or some other short output). The main concept of a proof is that underlying work is performed and a validated short answer is produced as the result.
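To make the one-bit True/False outcome concrete, the following is a minimal sketch of a Schnorr-style identification protocol in Python, one classic (honest-verifier) zero-knowledge construction; the group parameters p, g, q and the secret x are illustrative placeholders, far too small for real use.

    import random

    # Toy Schnorr-style identification: the prover demonstrates knowledge of
    # a secret x satisfying y = g^x (mod p) without revealing x.
    # Parameters are illustrative placeholders, far too small for real use.
    p, g, q = 23, 5, 22            # modulus, generator, and the generator's order
    x = 7                          # the prover's secret
    y = pow(g, x, p)               # public value known to the verifier

    r = random.randrange(1, q)     # prover's one-time random nonce
    t = pow(g, r, p)               # commitment sent to the verifier
    c = random.randrange(1, q)     # verifier's random challenge
    s = (r + c * x) % q            # prover's response

    # The verifier's entire takeaway is a one-bit True/False answer
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("Proof accepted: the statement is valid; x was never revealed")

The underlying work (modular exponentiation over the secret) is compressed into the short transcript (t, c, s), which the verifier can check without learning anything about x itself.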
Quantum error correction is necessary to repair quantum information bits (qubits) that become damaged, while adhering to quantum constraints such as the no-cloning theorem (quantum information cannot be copied) and the no-measurement rule (quantum information cannot be measured without damaging it). Consequently, quantum error correction relies upon entanglement among qubits to smear out the original qubit’s information onto entangled qubits, which can then be used to error-correct (restore) the initial qubit. The proximate use is quantum error correction itself. The greater benefit, however, is that the error correction apparatus constitutes a structural feature that can be more widely deployed. An error correction-type architecture can be used for novel purposes. One such project deploys the error correction feature to control local qubit interactions in an otherwise undirectable quantum annealing solver (Lechner et al., 2015). The overall concept is system manipulation and control through quantum error correction-type models.
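As an illustration of smearing one qubit's information across entangled partners, the following sketch builds the textbook three-qubit bit-flip code as a circuit in Python, assuming the Qiskit library is available; this is the standard pedagogical example, not the specific architecture of Lechner et al.

    from qiskit import QuantumCircuit

    # Three-qubit bit-flip code: q0-q2 hold the data, q3-q4 are syndrome
    # ancillas. No qubit is copied (no-cloning) and no data qubit is
    # measured directly (no-measurement); only the ancillas are read out.
    qc = QuantumCircuit(5, 2)

    # Encode: entangle the logical qubit (q0) with q1 and q2,
    # smearing its information across all three data qubits
    qc.cx(0, 1)
    qc.cx(0, 2)

    # Introduce a bit-flip error on one data qubit
    qc.x(1)

    # Syndrome extraction: write parity checks onto the ancillas
    qc.cx(0, 3)
    qc.cx(1, 3)    # q3 records the parity of q0 and q1
    qc.cx(1, 4)
    qc.cx(2, 4)    # q4 records the parity of q1 and q2

    # Measuring only the ancillas (syndrome 11 here) locates the error
    # on q1, which a corrective X gate on q1 would then restore
    qc.measure(3, 0)
    qc.measure(4, 1)
    print(qc.draw())

The syndrome identifies which qubit was damaged without ever measuring the encoded information itself, which is the structural feature that projects such as Lechner et al. repurpose for system control.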
Hash functions are another example of a general-purpose smart network technology whose underlying mechanism is not new, but is finding a wider range of uses. Like proofs, hash functions are a form of information compression technology. A hash function is an algorithm that can be run over any arbitrarily large digital data file (e.g. a genome, movie, software codebase, or 250-page legal contract) and that results in a fixed-length code, often 32 bytes (64 hexadecimal characters). Hash functions have many current uses, including password protection and securing messages sent across the internet: a receiving party with the hashing algorithm and a shared key can verify that a message is authentic and has not been altered (hashes are one-way functions and do not themselves encrypt or decrypt data). Hash functions are also finding novel uses. One is that since internet content can be specified with a URL, the URL can be called in a hash format by other programs (the hash function standardizes an arbitrarily long URL to a fixed-length input). This concept is seen in Web 3.0 as hash-linked data structures.
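A short Python example using the standard library's hashlib shows the fixed-length property: inputs of any size map to the same 32-byte (64 hexadecimal character) digest length.

    import hashlib

    # SHA-256 compresses inputs of any size to a fixed 32-byte digest
    for data in [b"short message", b"x" * 10_000_000]:  # 13 bytes vs. ~10 MB
        digest = hashlib.sha256(data).hexdigest()
        print(f"{len(data):>10} bytes -> {digest}")     # always 64 hex characters

This same property is what makes hash-linked data structures workable: an arbitrarily long URL or content object reduces to a fixed-length identifier that other programs can reference uniformly.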
The key point is the development of generic feature sets in smart network technologies that can be translated to other uses. This is not a surprise, since a property of new technology is that its full range of applications cannot be envisioned at the outset, and evolves through use. The automobile, for example, was initially conceived as a horseless carriage. What is noticeable is the theme that these features are all forms of information compression and expansion techniques (proofs and hash functions compress information, while error correction expands information). This too is not surprising, given that these features are applied to information-theoretic domains in which a key question is the extent to which any signal is compressible, and more generally, signal-to-noise ratios and the total number of possible system configurations (entropy). However, proofs and hash functions differ from traditional information compression techniques in that they convert an arbitrarily large input to a fixed-length output, which connotes the attributes of a flexible and dynamical real-time system. This book (especially Chapter 15) extends these insights to interpret the emerging features (proofs, error correction, and hash functions), and quantum smart networks more generally, in a dimensional model (the bulk–boundary correspondence). In the bulk–boundary correspondence, the compression or expansion activity is performed in a higher-dimensional region (the bulk), and then translated such that the result appears in one fewer dimensions in another region (the boundary).
1.3 Chapter Highlights
This book aims to provide insight into how quantum computing and quantum information science, as a possible coming paradigm shift in computing, may influence other high-impact digital transformation technologies such as blockchain and machine learning. A theoretical connection between physics and information technology is established. Smart network theory is proposed as a physical theory of network technologies that is extensible to their potential progression to quantum mechanical information systems. The purpose is to elaborate a physical basis for technology theories that is easily deployable in the design, operation, and catalytic emergence of next-generation smart network systems. This work proposes the theoretical construct of smart network theories, specifically a smart network field theory (SNFT) and a smart network quantum field theory (SNQFT), as a foundational basis for the further development of smart network systems, and particularly quantum smart networks (smart network systems instantiated in quantum information processing environments).
There are pressing reasons to develop smart network theories as macro-level system control theories, since many smart network technologies are effectively black boxes whose operations are either unknown from the outset (deep learning networks) or become hidden through confidential transactions (blockchain-based economic networks). Such smart networks are complex systems whose potential for system criticality and nonlinear phase transitions is unknown and possibly of high magnitude.
Towards this end, Part 1 introduces smart networks and quantum computing. Chapter 2 defines smart networks and smart network theory, and develops the smart network field theory in the classical and quantum domains. Chapter 3 provides an overview of quantum computing, including basic concepts (such as the bit and the qubit) and a detailed review of the different quantum hardware approaches and superconducting materials. A topic of paramount concern is addressed: when it might become possible to break existing cryptography standards with quantum computing (estimated as unlikely within 10 years, although methods are constantly improving). Chapter 4 considers advanced topics in quantum computing such as interference, entanglement, error correction, and certifiably random bits as produced by the NIST Randomness Beacon.
Part 2 provides a detailed consideration of blockchains and zero-knowledge proofs. Chapter 5 elaborates a comprehensive range of improvements currently underway in classical blockchains. Chapter 6 discusses the quantum internet, quantum key distribution, the risks to blockchains, and proposals for instantiating blockchain protocols in a quantum format. Chapter 7 offers a deep dive into zero-knowledge proof technology, its current status, and its methods. Chapter 8 elaborates post-quantum cryptography and quantum proofs.
Part 3 focuses on machine learning and artificial intelligence. Chapter 9 discusses advances in classical machine learning such as adversarial learning and dark knowledge (also an information compression technique), and Chapter 10 articulates the status of quantum machine learning. The first kinds of applications being implemented on quantum computers are machine learning-related, since machine learning and quantum computation methods are applied to the same kinds of optimization and statistical data analysis problems.