Recent advances in computational power and machine learning have made deep learning methods the go-to algorithms for a variety of tasks, such as computer vision, speech recognition, and malware detection.
Developers of such algorithms mainly focus on the performance of their models. However, when these models are deployed in security-critical settings, malicious adversaries may attempt to exploit known vulnerabilities of machine learning models to mount attacks. For instance, it has been shown that deep neural networks are vulnerable to adversarial examples: inputs that closely resemble genuine examples but cause (targeted) misclassifications when fed to the network.
In these security-critical domains, it is important to understand which attack vectors exist in order to ensure that machine learning algorithms are robust to adversaries.
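As a concrete illustration of the adversarial examples mentioned above, the sketch below implements a minimal targeted fast gradient sign method (FGSM) step in PyTorch. It is not tied to any specific model studied by the group; `model`, `x`, `target`, and `epsilon` are placeholder names, and the perturbation budget is an assumed value.

```python
import torch
import torch.nn.functional as F

def fgsm_targeted(model, x, target, epsilon=0.03):
    """Craft a targeted adversarial example with a single FGSM step.

    model:   a classifier returning logits (hypothetical placeholder)
    x:       an input image tensor, values in [0, 1]
    target:  the label the adversary wants the model to predict
    epsilon: maximum per-pixel perturbation (assumed budget)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    # For a targeted attack, step so as to *decrease* the loss
    # with respect to the desired target label.
    loss = F.cross_entropy(logits, target)
    loss.backward()
    perturbation = -epsilon * x_adv.grad.sign()
    # Keep the perturbed input inside the valid image range.
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()
```

The resulting input typically looks nearly identical to the original to a human observer, yet can flip the network's prediction to the attacker-chosen class.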
People:
- Ivo Sluganovic
- Giulio Lovisotto
- Henry Turner