Amortizing SGX setup times through batched inference

Supervisor(s): Alexander Schlögl, MSc


We have built a framework that partially black-boxes machine learning inference using Intel SGX. Our performance evaluation has shown that for black-boxed dense layers, the setup time massively outweighs the execution time. This leads us to believe that a useful next step would be enabling batched inference, which would let us amortize the setup time over multiple inputs.
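The amortization argument can be made concrete with a small back-of-the-envelope sketch. The timings below are invented purely for illustration; real numbers would come from the framework's performance evaluation:

```python
# Hypothetical illustration of amortizing a fixed setup cost over a batch.
# The timing values are made up; actual SGX enclave setup and execution
# times depend on the model, the enclave configuration, and the hardware.

def per_input_cost(setup_ms: float, exec_ms: float, batch_size: int) -> float:
    """Average cost per input when one enclave setup serves a whole batch."""
    return (setup_ms + exec_ms * batch_size) / batch_size

# With setup dominating execution (say 500 ms setup vs. 2 ms per input),
# batching quickly drives the per-input cost toward the pure execution time:
for n in (1, 10, 100):
    print(f"batch size {n:3d}: {per_input_cost(500.0, 2.0, n):7.2f} ms/input")
```

As the batch size grows, the per-input cost approaches the per-input execution time, which is exactly the effect a batched-inference extension would aim to exploit.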

Your thesis would consist of understanding the existing code base, available on GitHub, and extending it to support batched inference. The thesis would also include a performance evaluation testing whether execution time varies between runs and how well setup time can be amortized over multiple runs.


Knowledge of machine learning and of C/C++


  • Schlögl, A. and Böhme, R. eNNclave: Offline Inference with Model Confidentiality. In Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security (AISec'20), 2020.