We have built a framework that partially black-boxes machine learning inference using Intel SGX. In the discussion following the Foreshadow presentation, it was stated that “hyperthreading [in SGX] is side-channel by design”. Nevertheless, we are interested in the performance that parallelized matrix multiplication can achieve inside an SGX enclave.
The project would implement matrix multiplication, parallelize it, and evaluate performance across multiple problem sizes and thread counts. Comparing regular untrusted execution against execution inside an SGX enclave would give insight into the performance impact of running inside a trusted enclave.