Security in Machine Learning

Recent advances in computational power and machine learning have made deep learning methods the go-to algorithms for a variety of tasks, such as computer vision, speech recognition, and malware detection.

Developers of such algorithms mainly focus on the performance of their models. However, when these models are used in security-critical applications, malicious adversaries may attempt to exploit known vulnerabilities of machine learning models to carry out attacks. For instance, deep neural networks have been shown to be vulnerable to adversarial examples: inputs that closely resemble genuine examples but cause (targeted) misclassifications when fed to the network.

In these security-critical domains, it is important to understand which attack vectors are present in order to ensure that machine learning algorithms are robust against adversaries.
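
To make the notion of an adversarial example concrete, the sketch below uses the well-known Fast Gradient Sign Method (FGSM) to perturb a classifier's input in PyTorch. It is only a minimal illustration, not the method of any of the publications listed below; the model, inputs, and perturbation budget are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Craft an adversarial version of x within an L-infinity budget of epsilon."""
        # Placeholder setup: `model` is any differentiable classifier returning logits,
        # `x` is an input batch with values in [0, 1], `y` holds the true labels.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss, then clip back to the valid range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even with a small epsilon, the perturbed input often looks indistinguishable from the original to a human while flipping the model's prediction, which is precisely the vulnerability described above.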

People:
  • Ivo Sluganovic
  • Giulio Lovisotto
  • Henry Turner

Publications

SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations

Giulio Lovisotto, Henry Turner, Ivo Sluganovic, Martin Strohmeier, Ivan Martinovic.

30th USENIX Security Symposium (USENIX Security 21). August 2021.

Paper · GitHub · Publication

Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

Giulio Lovisotto, Simon Eberz, Ivan Martinovic.

2020 IEEE European Symposium on Security and Privacy (EuroS&P). September 2020.

Paper · Video · GitHub · Publication

Seeing Red: PPG Biometrics Using Smartphone Cameras

Giulio Lovisotto, Henry Turner, Simon Eberz, Ivan Martinovic.

15th IEEE Computer Society Workshop on Biometrics. June 2020.

Paper · GitHub · Publication

Attacking Speaker Recognition Systems with Phoneme Morphing

Henry Turner, Giulio Lovisotto, Ivan Martinovic.

European Symposium on Research in Computer Security (ESORICS). 2019.

Paper · Publication