Replication of adversarial machine learning research

Supervisor(s): Alexander Schlögl, MSc


Machine learning (ML), and adversarial ML in particular, is a very fast-moving research area. In the race to publish as much and as fast as possible, replicability and external verification are unfortunately often left by the wayside.

This is a broader category of thesis, in which you will examine one previously published paper, and try to replicate its findings. Replication includes the preprocessing of data, training of models, and execution of experiments. I will assist you with know-how, hardware, and access to data.

A completed thesis results in a clean, well-documented open-source code repository that replicates the experiments from the paper. The results claimed in the paper are checked for validity and used as the basis for automated tests. The versions of all used packages should be clearly stated, to allow verification in the future.
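Such an automated test could, for example, compare each replicated metric against the value claimed in the paper, within a stated tolerance. The following is a minimal sketch of this idea; the metric name, claimed value, and tolerance are placeholders, not actual results from any of the referenced papers:

```python
# Hypothetical sketch of a replication check: compare a measured metric
# against the paper's claimed value within a tolerance. All names and
# numbers here are placeholders for illustration only.

CLAIMED_RESULTS = {
    "attack_accuracy": 0.74,  # placeholder, not a real paper result
}

def check_replication(name: str, measured: float, tolerance: float = 0.02) -> bool:
    """Return True if the measured metric is within tolerance of the claim."""
    claimed = CLAIMED_RESULTS[name]
    return abs(measured - claimed) <= tolerance

# A measurement close to the claim passes; one far off fails.
print(check_replication("attack_accuracy", 0.75))  # True
print(check_replication("attack_accuracy", 0.60))  # False
```

In practice such checks would run as part of a test suite (e.g. with pytest) against the metrics produced by the replicated experiments, so that any future change to the code or its pinned dependencies that breaks the replication is caught automatically.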

The references below are a starting point. If there is an interesting paper that you would like to replicate, I am sure we can agree on a thesis topic.


Prior experience with machine learning, and with adversarial ML in particular, is an advantage.


  • Shokri, R., Stronati, M., Song, C., and Shmatikov, V. Membership inference attacks against machine learning models. In IEEE Symposium on Security and Privacy (S&P). 2017, pp. 3–18.
  • Jagielski, M., Carlini, N., Berthelot, D., Kurakin, A., and Papernot, N. High Accuracy and High Fidelity Extraction of Neural Networks. In USENIX Security Symposium. USENIX, 2020, pp. 1345–1362.
  • Papernot, N., McDaniel, P., Sinha, A., and Wellman, M.P. SoK: Security and Privacy in Machine Learning. In IEEE European Symposium on Security and Privacy (EuroS&P). 2018, pp. 399–414.