Parallelizing matrix multiplication for Intel SGX

Supervisor(s): Alexander Schlögl, MSc


We have built a framework that partially black-boxes machine learning inference using Intel SGX. In the discussion following the Foreshadow presentation, it was stated that “hyperthreading [in SGX] is side-channel by design”. Nevertheless, we are interested in the achievable performance of parallelized matrix multiplication inside an SGX enclave.

The project would implement matrix multiplication, parallelize it, and evaluate the performance across multiple problem sizes and thread counts. A comparison between regular untrusted execution and execution inside an SGX enclave would yield insights into the performance impact of running inside a trusted enclave. The eNNclave codebase can serve as a basis for handling the SGX compilation process.


Knowledge of parallel algorithms is required; no prior SGX knowledge is necessary.


  • Schlögl, A. and Böhme, R. eNNclave: Offline Inference with Model Confidentiality. In 13th ACM Workshop on Artificial Intelligence and Security (AISec’20). 2020. [PDF] [Video]
  • Van Bulck, J., Minkin, M., Weisse, O., et al. Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution. In USENIX Security Symposium. 2018, pp. 991–1008.