Generative models learn the distribution of data from a sample dataset and
can then generate new data instances. Recent advances in deep learning have
led to improvements in generative model architectures, and some
state-of-the-art models can, in some cases, produce outputs realistic enough to
fool humans.
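To make the core idea concrete, the following is a minimal sketch (not taken from the survey itself) of fitting a simple generative model to a sample dataset and then drawing new instances from it. A Gaussian mixture from scikit-learn stands in here for the deep generative models discussed in the survey; the dataset and parameters are purely illustrative.

```python
# Minimal illustration: learn an approximation of the data distribution from a
# sample dataset, then generate new data instances by sampling from it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# "Sample dataset": two clusters of 2-D points standing in for real training data.
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(500, 2)),
])

# Fit a simple generative model to the data.
model = GaussianMixture(n_components=2, random_state=0).fit(data)

# Generate new data instances from the learned distribution.
new_samples, _ = model.sample(n_samples=5)
print(new_samples)
```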

We survey recent research at the intersection of security, privacy, and
generative models. In particular, we discuss the use of generative models in
adversarial machine learning, in helping automate or enhance existing attacks,
and as building blocks for defenses in contexts such as intrusion detection,
biometrics spoofing, and malware obfuscation. We also describe the use of
generative models in diverse applications such as fairness in machine learning,
privacy-preserving data synthesis, and steganography. Finally, we discuss new
threats due to generative models: the creation of synthetic media such as
deepfakes that can be used for disinformation.
