The emerging sub-field of adversarial machine learning (more precisely: machine learning in adversarial environments) has established a taxonomy of attacks that are performed during the training or inference phase of machine learning tasks and that violate various protection goals. Deep neural networks (DNNs) appear particularly vulnerable to adversarial examples, inputs with small, deliberately crafted perturbations that cause misclassification.
To defend neural networks against such attacks, recent approaches propose the use of secret keys in the training or inference pipelines of learning systems. However, how the secrecy of the key is actually maintained is often not discussed.
The goal of this thesis is to explore this issue for the case of a recently proposed key-based DNN. It should experimentally measure the leakage of key information under selected attacker models.