Deep learning is widely used in artificial intelligence fields such as computer vision, natural language processing, and intelligent robotics. As deep learning develops, expectations for the technology rise accordingly. Enterprises and individuals usually need substantial computing power to put deep learning into practice, and many cloud service providers deploy cloud computing environments for this purpose. However, transferring data to cloud service providers and using that data for model training carries severe risks of privacy leakage, which leaves users unable to adopt deep learning in cloud computing environments with confidence. This paper reviews the privacy leakage problems that arise when using deep learning, introduces deep learning algorithms that support privacy protection, compares these algorithms and discusses their prospects, and summarizes developments in this area.

In recent years, with the continuous development of computer technology and the rise of artificial intelligence, deep learning has achieved remarkable success in applications such as image classification [

Deep learning's need to leverage cloud computing resources naturally raises privacy concerns. Cloud computing providers package and deploy their deep learning application services and rent them directly to users. Without proper security mechanisms, cloud providers can easily access critical unauthorized knowledge, such as private input data and the model's classification results. Due to these privacy issues, customers may be unwilling or unable to provide data to service providers, for example when seeking assistance with medical diagnosis [

This section discusses cryptographic techniques applied to deep learning privacy protection, including homomorphic encryption, differential privacy, image encryption, and secure multi-party computation.

Homomorphic encryption (HE) technology provides a solution to the privacy protection problem in neural network models. Homomorphic encryption is divided into partially homomorphic encryption and fully homomorphic encryption. A scheme whose ciphertext operations support only a limited number of additions or only multiplications is additively or multiplicatively homomorphic; a scheme that supports an arbitrary number of ciphertext additions and multiplications is fully homomorphic. The classic RSA encryption algorithm [

HE supports operations on ciphertexts whose decrypted result equals the result of the same operation on the plaintexts, that is, Dec(f(Enc(m_1), Enc(m_2))) = f(m_1, m_2). Like other types of encryption schemes, a homomorphic encryption scheme has the following three functional modules:

The ciphertext homomorphic operations include homomorphic addition and homomorphic multiplication. The homomorphic addition operation Add(Enc(m_1), Enc(m_2)) = Enc(m_1 + m_2): given the ciphertexts corresponding to m_1 and m_2, it computes, under the evaluation key, a ciphertext of the sum m_1 + m_2. The homomorphic multiplication operation Mult(Enc(m_1), Enc(m_2)) = Enc(m_1 × m_2): given the ciphertexts corresponding to m_1 and m_2, it computes, under the evaluation key, a ciphertext of the product m_1 × m_2.
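As a concrete illustration, the additive homomorphism described above can be demonstrated with a toy Paillier cryptosystem, a classic partially homomorphic scheme. The sketch below uses deliberately small, hard-coded primes and omits padding and parameter validation, so it is illustrative only, not a secure implementation:

```python
import math
import random

# small fixed primes for the demo (insecure; real Paillier uses ~1024-bit primes)
p, q = 1999, 2003
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)  # modular inverse; valid because g = n + 1

def encrypt(m):
    # Enc(m) = (1 + n)^m * r^n mod n^2, with random r coprime to n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c_sum = (encrypt(123) * encrypt(456)) % n2  # ciphertext product = sum of plaintexts
assert decrypt(c_sum) == 579
c_scaled = pow(encrypt(123), 5, n2)         # ciphertext power = scalar multiple
assert decrypt(c_scaled) == 615
```

Multiplying two ciphertexts modulo n² yields a ciphertext of the sum of the plaintexts, and raising a ciphertext to a power k yields a ciphertext of k times the plaintext, which matches the additive homomorphism just described.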

Differential privacy [

D_1 and D_2 are databases differing in exactly one record, and a randomized mechanism M satisfies ε-differential privacy if, for every set of outputs S, Pr[M(D_1) ∈ S] ≤ e^ε · Pr[M(D_2) ∈ S].
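The most common way to achieve ε-differential privacy for a numeric query is the Laplace mechanism: add noise drawn from Laplace(0, Δf/ε), where Δf is the query's sensitivity. A minimal sketch follows; the function names are our own for illustration, not from any particular library:

```python
import math
import random

def laplace_noise(scale):
    # inverse-CDF sampling from the Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon):
    # a counting query has sensitivity 1, so the noise scale is 1 / epsilon
    return true_count + laplace_noise(1.0 / epsilon)
```

With a smaller ε the noise scale grows, hiding any single record's contribution more strongly at the cost of query accuracy.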

Digital images differ from traditional text data in many ways. Image encryption technology exploits the temporal, spatial, and visual-perception characteristics of images to design encryption algorithms that improve image security. Image encryption refers to the process of transforming a plaintext image into a ciphertext image under an encryption function and a key. The entire encryption, decryption, and transmission process can be described as follows: the plaintext image is encrypted into a ciphertext image; the sender transmits the ciphertext image to the receiver over an insecure channel; the receiver uses the decryption key to decrypt the ciphertext image back into the plaintext image; and the key required for encryption and decryption is transmitted over a secure channel.

Chaos exhibits pseudo-randomness and sensitivity to initial conditions, which matches the requirements of image encryption, and the principles of chaos are of great help to digital image encryption. Many scholars have combined chaos with encryption technology, so chaotic techniques are widely used in image encryption and play an important role when image encryption is used to protect deep learning privacy. There are three common chaotic encryption methods: image encryption based on grayscale substitution, image encryption based on pixel scrambling, and iterative image encryption.
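A minimal sketch of grayscale-substitution-style chaotic encryption: iterate the logistic map x_{n+1} = r·x_n·(1 − x_n) in its chaotic regime (r close to 4) to derive a keystream, then XOR it with the pixel values. The parameter values here are illustrative assumptions, not prescribed by any specific scheme:

```python
def logistic_keystream(x0, r, length):
    # iterate the logistic map and quantize each state to a byte
    x = x0
    stream = []
    for _ in range(length):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_image(pixels, x0=0.3141, r=3.99):
    # (x0, r) act as the secret key; XOR is its own inverse,
    # so the same call both encrypts and decrypts
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

img = [10, 200, 33, 97]          # toy grayscale pixel values
enc = xor_image(img)
assert xor_image(enc) == img     # same key restores the plaintext image
```

Because the logistic map is sensitive to its initial condition, even a tiny change in x0 produces a completely different keystream, which is exactly the property chaotic image encryption relies on.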

Secure Multi-Party Computation [

Deep learning privacy protection based on secure multi-party computation has attracted a large number of researchers, mainly using the following privacy protection tools: garbled circuits [

This section mainly discusses deep learning applications based on the cryptographic techniques introduced in Section 2, including privacy-preserving deep learning algorithms based on homomorphic encryption, differential privacy, image encryption, and secure multi-party computation.

The emergence of homomorphic encryption provides a way to solve the privacy problem of deep learning: operations can be performed on ciphertexts without affecting the correctness of the decrypted result. When homomorphic encryption protects a deep learning application, the prediction stage can run directly on encrypted data; the prediction results are ciphertexts that are returned to the user for decryption, protecting the privacy of the user's data. Homomorphic encryption can also be applied during training to protect the training data uploaded by users.

In 2006, Barni et al. [

Although the above methods can be applied to deep networks, they still suffer from low accuracy and high ciphertext-computation complexity. To improve the accuracy of homomorphic encryption schemes and the efficiency of ciphertext operations, the paper [

Differential privacy, which comes with a rigorous mathematical proof, is one of the most popular privacy protection technologies. Its basic idea is to limit the sensitivity of database query results to any single record, ensuring that whether a given record is in the data set or not has little impact on the final query output [

The combination of differential privacy algorithms and deep learning has been one of the research hotspots in deep learning privacy protection in recent years. These algorithms can be applied to various parts of a deep learning model. The differential privacy protection scheme of the input layer [

Compared with the above two differential privacy protection schemes, the scheme that adds noise to the training parameters in the hidden layers is more widely used. Abadi et al. [
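The hidden-layer approach of Abadi et al. (DP-SGD) clips each example's gradient to bound its influence and then adds Gaussian noise to the averaged gradient before the parameter update. A simplified single-step sketch with NumPy follows; the hyperparameter values are placeholders, and a real implementation would also track the cumulative privacy budget across steps:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    # clip each per-example gradient so no single example's influence
    # exceeds clip_norm (this bounds the sensitivity of the update)
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    # average the clipped gradients and add Gaussian noise calibrated
    # to the clipping bound and the noise multiplier
    noisy_grad = np.mean(clipped, axis=0) + np.random.normal(
        0.0, noise_mult * clip_norm / len(per_example_grads),
        size=params.shape)
    return params - lr * noisy_grad
```

The noise multiplier and clipping norm together determine the (ε, δ) privacy guarantee; larger noise gives stronger privacy but slower, noisier convergence.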

In 2018, AprilPyone et al. [ proposed a scheme in which, for each pixel value (p_R, p_G, p_B), K_C is a generated random binary integer used to perform positive and negative transformations on each pixel value of the private image. Based on the image encryption proposed in [, one set of keys K_1 is used to encrypt the training images and another set K_2, entirely different from K_1, is used to encrypt the test images [
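The positive and negative transformation can be sketched as a keyed sign flip on 8-bit pixel values: wherever the key bit is 1, a pixel p is replaced by 255 − p. This is a minimal reading of the idea for illustration, not the authors' exact algorithm:

```python
import random

def sign_transform(pixels, key_bits):
    # keyed positive/negative transformation for 8-bit pixels:
    # a key bit of 1 flips p to 255 - p, a key bit of 0 leaves p unchanged
    return [(255 - p) if b else p for p, b in zip(pixels, key_bits)]

random.seed(42)
key = [random.getrandbits(1) for _ in range(4)]  # one random binary key bit per pixel
img = [0, 128, 200, 255]
enc = sign_transform(img, key)
assert sign_transform(enc, key) == img  # the transformation is its own inverse
```

Because 255 − (255 − p) = p, applying the transformation twice with the same key restores the image, so encryption and decryption share one function.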

Deep learning privacy protection algorithms based on secure multi-party computation have attracted many researchers. Such schemes usually involve multiple parties, such as an additional trusted party or multiple non-colluding cloud servers, and a larger number of interaction rounds. In 2017, Mohassel et al. [

Secret sharing is a commonly used privacy protection method in secure multi-party computation. Compared with other privacy protection methods in production applications, it can reduce overhead and improve efficiency. Ma et al. [
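Additive secret sharing, the building block most often used in such schemes, splits a value into random shares that sum to the secret modulo a prime; parties can then add shared values locally without ever seeing them. A minimal sketch follows, where the field prime and party count are arbitrary illustrative choices:

```python
import random

P = 2**61 - 1  # a Mersenne prime field for exact modular arithmetic

def share(secret, n=3):
    # additive secret sharing: n - 1 uniformly random shares,
    # plus one final share that makes the total sum to the secret mod P
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 1234, 5678
sa, sb = share(a), share(b)
# each party adds its own shares locally; no party ever sees a or b
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
assert reconstruct(sum_shares) == (a + b) % P
```

Any subset of fewer than n shares is uniformly random and reveals nothing about the secret, which is why non-colluding servers can jointly compute on shared training data.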

With the development of deep learning, more and more people have recognized the importance of privacy in deep learning, yet existing privacy protection schemes still have limitations.

Building on an introduction to standard privacy protection methods, this paper surveys existing deep learning algorithms based on homomorphic encryption, differential privacy, image encryption, and secure multi-party computation, and systematically summarizes the limitations of existing privacy protection methods in deep learning applications. In the future, as privacy concerns grow, deep learning applications that support privacy protection will continue to evolve and improve.

The authors received no specific funding for this study.

The authors declare that they have no conflicts of interest to report regarding the present study.