Description
Some compression algorithms benefit greatly from smoothness of their input data, i.e., from the data being close to a polynomial. We know that this is the case for some physics-based simulations. What is unknown is whether the parameters of a neural network can be constrained to be similarly smooth, thus enabling high-throughput compression for distributed learning algorithms.
Your goal in this thesis is to find out whether parameter updates, or the parameters themselves, can be constrained so that they are reasonably well approximated by a polynomial. This could be achieved either through an additional loss term or by quantizing the parameter updates/weights. We are interested in the achievable compression ratio, as well as the accuracy change incurred by the additional constraints.
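To make the core question concrete, a minimal sketch of the measurement one would start with: fit a polynomial to a flattened parameter vector and compare the number of stored coefficients against the number of parameters. The synthetic `weights` array and the chosen degree are purely illustrative assumptions, not part of the thesis specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one layer's flattened weight tensor.
# A cumulative sum of small noise gives a smooth-ish signal.
weights = np.cumsum(rng.normal(scale=0.01, size=1024))

# Fit a low-degree polynomial over a normalized coordinate axis.
degree = 7
x = np.linspace(-1.0, 1.0, weights.size)
coeffs = np.polynomial.polynomial.polyfit(x, weights, degree)
approx = np.polynomial.polynomial.polyval(x, coeffs)

# Compression ratio: store (degree + 1) coefficients instead of all weights.
ratio = weights.size / coeffs.size
rmse = np.sqrt(np.mean((weights - approx) ** 2))
print(f"compression ratio: {ratio:.0f}x, approximation RMSE: {rmse:.4f}")
```

The RMSE here plays the role of the "accuracy change" proxy mentioned above: the larger the residual, the more information the polynomial constraint would discard from the updates.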
Note that due to the highly experimental nature of this thesis, we cannot guarantee success. Negative results (i.e., it cannot be done in this way) are also completely valid and, if well documented, do not stand in the way of a very good grade.