privacy is achieved via generalization and suppression mechanisms.

      Model Poisoning Attacks. It has been shown that federated learning may be vulnerable to model poisoning attacks [Bhagoji et al., 2019], also known as backdoor attacks [Bagdasaryan et al., 2019]. For example, a malicious participant in federated learning may inject a hidden backdoor functionality into the trained federated model, e.g., to cause a trained word predictor to complete certain sentences with an attacker-chosen word [Bagdasaryan et al., 2019]. Bhagoji et al. [2019] proposed a number of strategies for carrying out model poisoning attacks, such as boosting the malicious participant’s model update, an alternating minimization strategy that alternately optimizes the legitimate training loss and the adversarial backdoor objective, and using parameter estimation of the benign updates to improve attack success. Bagdasaryan et al. [2019] developed a new model poisoning methodology based on model replacement, in which a constrain-and-scale technique is used to evade anomaly-detection-based defenses by incorporating the evasion into the attacker’s loss function during model training. Possible defenses against model poisoning attacks include blockchain-based approaches [Preuveneers et al., 2018] and trusted execution environment (TEE) based approaches [Mo and Haddadi, 2019].
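
      To make the boosting idea concrete, the following toy NumPy sketch (not taken from the book; the number of clients, update dimension, and values are all illustrative) shows how a single malicious participant can scale its update so that it survives the server's averaging step and dominates the aggregated model.

```python
import numpy as np

# Toy illustration of explicit boosting in federated averaging.
# All shapes, values, and the number of clients are hypothetical.
rng = np.random.default_rng(0)
n_clients, dim = 10, 5

# Benign clients submit small, honest model updates.
benign_updates = [rng.normal(0.0, 0.01, dim) for _ in range(n_clients - 1)]

# The attacker's desired (backdoored) update.
target_update = np.array([0.5, -0.3, 0.2, 0.0, 0.1])

# Explicit boosting: scale the malicious update by roughly the number of
# clients so that it survives the server's 1/n averaging.
malicious_update = n_clients * target_update

# Server performs plain federated averaging over all received updates.
aggregated = np.mean(benign_updates + [malicious_update], axis=0)

# The aggregate is close to the attacker's target, since the benign
# contributions are comparatively small.
print(np.round(aggregated, 3))
```

      Such crudely boosted updates tend to stand out, which is one motivation for the constrain-and-scale technique mentioned above: the attacker folds an evasion term into its training loss so that the submitted update looks statistically closer to the benign ones.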

      For cryptographic PPML techniques, including MPC and HE, two types of adversaries are considered in the literature.

      • Semi-honest adversaries. In the semi-honest (a.k.a. honest-but-curious, or passive) adversary model, the adversaries abide by the protocol honestly, but attempt to learn additional information beyond the output from the messages they receive.

      • Malicious adversaries. In the malicious (a.k.a. active) adversary model, the adversaries deviate from the protocol and can behave arbitrarily.

      The semi-honest adversary model is widely adopted in most PPML studies. The main reason is that, in federated learning, it is in each party’s interest to honestly follow the ML protocol, since malicious behavior would also undermine the adversary’s own benefit. The other reason is that, in cryptography, it is standard practice to first build a protocol that is secure against semi-honest adversaries, and then modify it into one that is secure against malicious adversaries via zero-knowledge proofs.

      For both security models, the adversaries corrupt a fraction of the parties, and the corrupted parties may collude with each other. The corruption of parties can be static or adaptive. The computational power of an adversary can be either polynomial-time bounded or unbounded, corresponding to computational security and information-theoretic security, respectively. Security in cryptography is defined based on the notion of indistinguishability. Interested readers can refer to Lindell [2005] and Lindell and Pinkas [2009] for detailed analyses of adversary and security models.

      In this section, we discuss privacy preservation techniques. We cover three types of such approaches, namely (1) MPC, (2) HE, and (3) DP.

      Secure Multi-Party Computation (MPC), a.k.a. secure function evaluation (SFE), was initially introduced as a secure two-party computation problem (the famous Millionaires’ Problem), and was later generalized by Andrew Yao [1986]. In MPC, the objective is to jointly compute a function from each party’s private input, without revealing these inputs to the other parties. MPC tells us that, for any functionality, it is possible to compute it without revealing anything other than the output.

       Definition

      MPC allows us to compute functions of private input values so that each party learns only the corresponding function output value, but not the input values of the other parties. For example, given a secret value x that is split into n shares so that a party Pi only knows xi, all parties can collaboratively compute

      y1,…, yn = f(x1,…, xn)

      so that party Pi learns nothing beyond the output value yi corresponding to its own input xi.
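
      As a toy illustration of this definition, the following sketch (an assumption-laden example, not the book's protocol) uses additive secret sharing modulo a prime so that three parties jointly compute the sum of their private inputs; the modulus, inputs, and choice of summation as the function f are purely illustrative.

```python
import random

# Toy additive secret sharing over Z_q; q and the inputs are illustrative.
q = 2**61 - 1  # a prime modulus

def share(value, n):
    """Split `value` into n additive shares that sum to `value` mod q."""
    shares = [random.randrange(q) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % q)
    return shares

# Three parties with private inputs; here f(x1, x2, x3) = x1 + x2 + x3.
private_inputs = [12, 30, 7]
n = len(private_inputs)

# Each party splits its input and sends one share to every other party.
all_shares = [share(x, n) for x in private_inputs]

# Party i locally adds the shares it received (column i of the matrix).
partial_sums = [sum(all_shares[j][i] for j in range(n)) % q for i in range(n)]

# Publishing only the partial sums reveals the output, not the inputs.
assert sum(partial_sums) % q == sum(private_inputs) % q
print(sum(partial_sums) % q)  # 49
```

      Each individual share is uniformly random, so a party that sees fewer than all n shares of an input learns nothing about it; only the published partial sums reveal the agreed output.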

      The standard approach for proving that an MPC protocol is secure is the simulation paradigm [Lindell, 2017]. To prove that an MPC protocol is secure against adversaries that corrupt t parties under the simulation paradigm, we build a simulator that, given the inputs and outputs of the t colluding parties, generates t transcripts that are indistinguishable from those generated in the actual protocol.

      In general, MPC can be implemented through three different frameworks, namely: (1) Oblivious Transfer (OT) [Keller et al., 2016, Goldreich et al., 1987]; (2) Secret Sharing (SS) [Shamir, 1979, Rabin and Ben-Or, 1989]; and (3) Threshold Homomorphic Encryption (THE) [Cramer et al., 2001, Damgård and Nielsen, 2003]. From a certain point of view, both oblivious transfer protocols and threshold homomorphic encryption schemes use the idea of secret sharing. This might be the reason why secret sharing is widely regarded as the core of MPC. In the rest of this section, we will introduce oblivious transfer and secret sharing.
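
      As a brief illustration of the secret sharing framework, the sketch below implements Shamir’s (t, n) threshold scheme [Shamir, 1979] over a prime field; the specific prime, threshold, and secret are illustrative choices rather than values from the text.

```python
import random

# Toy Shamir (t, n) secret sharing over GF(p); parameters are illustrative.
p = 2**127 - 1  # a Mersenne prime, large enough for the toy secret below

def make_shares(secret, t, n):
    """Create n shares of `secret`; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    # The share for party i is the random degree-(t-1) polynomial at x = i.
    return [(i, sum(c * pow(i, k, p) for k, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

      Any t shares determine the degree-(t − 1) polynomial and hence the secret, while any t − 1 shares reveal nothing about it.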

       Oblivious Transfer

      OT is a two-party computation protocol proposed by Rabin in 1981 [Rabin, 2005]. In OT, the sender owns a database of message-index pairs (M1, 1),…, (MN, N). At each transfer, the receiver chooses an index i for some 1 ≤ i ≤ N, and receives Mi. The receiver does not learn any other information about the database, and the sender does not learn anything about the receiver’s selection i. Here, we give the definition of 1-out-of-n OT.

      Definition 2.1 1-out-of-n OT: Suppose Party A has a list (x1,…, xn) as the input, and Party B has i ∈ {1,…, n} as the input. 1-out-of-n OT is an MPC protocol where A learns nothing about i and B learns nothing else but xi.

      When n = 2, we get 1-out-of-2 OT which has the following property: 1-out-of-2 OT is universal for two-party MPC [Ishai et al., 2008]. That is, given a 1-out-of-2 OT protocol, one can conduct any secure two-party computation.

      Many constructions of OT have been proposed, such as the Bellare–Micali [Bellare and Micali, 1990], Naor–Pinkas [Naor and Pinkas, 2001], and Hazay–Lindell [Hazay and Lindell, 2010] approaches. Here, we demonstrate the Bellare–Micali construction of OT, which utilizes Diffie–Hellman key exchange and is based on the computational Diffie–Hellman (CDH) assumption [Diffie and Hellman, 1976]. The Bellare–Micali construction works as follows: the receiver sends two public keys to the sender, but holds the private key corresponding to only one of them, and the sender does not know which one. The sender then encrypts the two messages with their corresponding public keys and sends the ciphertexts to the receiver. Finally, the receiver decrypts the target ciphertext with its private key.

      Bellare–Micali Construction. Consider a discrete logarithm setting (G, g, p), where G is a group of prime order p, g ∈ G is a generator, and H : G → {0, 1}^n is a hash function. Suppose the sender A has x0, x1 ∈ {0, 1}^n, and the receiver B has b ∈ {0, 1}.

      1. A chooses a random element c ∈ G and sends it to B.

      2. B chooses k ← Zp and sets PKb = g^k and PK1−b = c/PKb, then sends PK0 to A. A sets PK1 = c/PK0.

      3. A encrypts x0 with the ElGamal scheme using PK0
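
      The excerpt breaks off here; in the standard description of this construction, A likewise encrypts x1 under PK1, sends both ciphertexts, and B decrypts only the ciphertext for which it knows the private key k. The sketch below puts the full flow together in Python, realizing the abstract group G of prime order as the quadratic-residue subgroup of the integers modulo a small safe prime and using a hashed variant of ElGamal built from the function H above; the concrete parameters, the 16-byte message length, and the SHA-256-based hash are toy assumptions for illustration and provide no real security.

```python
import hashlib
import secrets

# Toy walk-through of the Bellare-Micali 1-out-of-2 OT with hashed ElGamal.
# The tiny safe prime, generator, and truncated SHA-256 hash are illustrative
# only and offer no real security; they merely make the steps executable.
P = 2903                  # safe prime: P = 2q + 1
q = 1451                  # prime order of the quadratic-residue subgroup
g = 4                     # 4 = 2^2 generates the order-q subgroup

def H(elem: int) -> bytes:
    """The hash H : G -> {0,1}^n, here n = 128 bits via truncated SHA-256."""
    return hashlib.sha256(elem.to_bytes(2, "big")).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Sender A holds two 16-byte messages x0, x1; receiver B holds the bit b.
x0, x1 = b"message zero....", b"message one....."
b = 1

# Step 1: A chooses a random element c of the subgroup and sends it to B.
c = pow(g, secrets.randbelow(q), P)

# Step 2: B picks k, sets PK_b = g^k and PK_{1-b} = c / PK_b, sends PK_0.
k = secrets.randbelow(q)
PK = [None, None]
PK[b] = pow(g, k, P)
PK[1 - b] = c * pow(PK[b], -1, P) % P
PK0 = PK[0]
PK1 = c * pow(PK0, -1, P) % P          # A recomputes PK_1 from c and PK_0.

# Step 3: A encrypts x0 under PK0 and x1 under PK1 (hashed ElGamal).
r0, r1 = secrets.randbelow(q), secrets.randbelow(q)
ct0 = (pow(g, r0, P), xor(H(pow(PK0, r0, P)), x0))
ct1 = (pow(g, r1, P), xor(H(pow(PK1, r1, P)), x1))

# Step 4: B decrypts only the ciphertext for its choice bit b, because it
# knows the discrete logarithm k of PK_b but not of PK_{1-b}.
u, v = (ct0, ct1)[b]
print(xor(H(pow(u, k, P)), v))          # prints b"message one....."
```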
