From Traditional Fault Tolerance to Blockchain. Wenbing Zhao

      3.8 Dependency templates for nodes, processes, network paths, and the neighbor sets.

      3.9 A partial dependency graph for an example system.

      3.10 The error function.

      3.11 A hypothetical dependency graph with abnormality for each component and the weight for each edge labeled.

      3.12 The components that form a cycle in the f-map are reduced to a single unit in the r-map for recursive recovery.

      3.13 The architecture of an Operator Undo framework.

      4.1 The replication algorithm is typically implemented in a fault tolerance middleware framework.

      4.2 Active replication, without (top) and with (bottom) voting at the client.

      4.3 Passive replication.

      4.4 Semi-active replication.

      4.5 A write-all algorithm for data replication.

      4.6 The problem of the write-all-available algorithm for data replication.

      4.7 Preventing a transaction from accessing a not-fully-recovered replica is not sufficient to ensure one-copy serializable execution of transactions.

      4.8 An example run of the quorum consensus algorithm on a single data item.

      4.10 An example run of a system with three sites that uses Lamport clocks.

      4.11 An example run of a system with three sites that uses vector clocks.

      4.12 An example for the determination of the new version vector value after reconciling a conflict.

      4.13 An example operation propagation using vector clocks in a system with three replicas.

      4.14 An example for operation propagation using timestamp matrices in a system with three replicas.

      4.15 Update commit using ack vectors in a system with three replicas.

      4.16 Update commit using timestamp matrices in a system with three replicas.

      4.17 An illustration of the CAP theorem.

      4.18 Partition mode and partition recovery.

      5.1 Examples of systems that ensure uniform total ordering and nonuniform total ordering.

      5.2 In the sequencer based approach, a general system is structured into a combination of two subsystems, one with a single receiver and the other with a single sender of broadcast messages.

      5.3 An example rotation sequencer based system in normal operation.

      5.4 Normal operation of the membership view change protocol.

      5.5 Membership change scenario: competing originators.

      5.6 Membership change scenario: premature timeout.

      5.7 Membership change scenario: temporary network partitioning.

      5.8 A simplified finite state machine specification for Totem.

      5.9 A successful run of the Totem Membership Protocol.

      5.10 Membership changes due to a premature timeout by N2.

      5.11 Messages sent before N1 fails in an example scenario.

      5.12 Messages delivered during recovery for the example scenario.

      5.13 Messages sent before the network partitions into two groups, one with {N1, N2}, and the other with {N3, N4, N5}.

      5.14 Messages delivered during recovery in the two different partitions for the example scenario.

      6.1 Normal operation of the Paxos algorithm.

      6.2 A deadlock scenario with two competing proposers in the Paxos algorithm.

      6.3 If the system has already chosen a value, the safety property for consensus would hold even without the promise-not-to-accept-older-proposal requirement.

      6.4 If two competing proposers propose concurrently, the system might end up choosing two different values without the promise-not-to-accept-older-proposal requirement.

      6.5 With the promise-not-to-accept-older-proposal requirement in place, even if two competing proposers propose concurrently, only a single value may be chosen by the system.

      6.6 Normal operation of Multi-Paxos in a client-server system with 3 server replicas and a single client.

      6.7 View change algorithm for Multi-Paxos.

      6.8 With reconfigurations, a group of 7 replicas (initially 5 active and 2 spare replicas) can tolerate up to 5 single faults (without reconfigurations, only up to 3 faults can be tolerated).

      6.9 The primary and secondary quorums formation for a system with 3 main replicas and 2 auxiliary replicas.

      6.10 The primary and secondary quorums formation as the system reconfigures due to the failures of main replicas.

      6.11 Normal operation of Cheap Paxos in a system with 3 main replicas and 1 auxiliary replica.

      6.12 The primary and secondary quorums formation for a system with 3 main replicas and 2 auxiliary replicas.

      6.13 Normal operation of (Multi-) Fast Paxos in a client-server system.

      6.14 Collision recovery in an example system.

      6.15 Expansion of the membership by adding two replicas in method 1.

      6.16 Expansion of the membership by adding two replicas in method 2.

      6.17 Reduction of the membership by removing two replicas one after another.

      7.1 Two scenarios that highlight why it is impossible to use 3 generals to solve the Byzantine generals problem.

      7.2 The message flow and the basic steps of the OM(1) algorithm.

      7.3 The message flow and the basic steps of the OM(2) algorithm.

      7.4 Normal operation of the PBFT algorithm.

      7.6 A worst case scenario for tentative execution.

      7.7 Normal operation of Fast Byzantine fault tolerance.

      7.8 Zyzzyva agreement protocol (case 1).

      7.9 Zyzzyva agreement protocol (case 2).

      7.10 A corner case in view change in Zyzzyva.
