Description
Digital forensics deals with the scientific reconstruction of digital traces for use in a court of law.
Over the past two decades, this field has seen a significant increase in the use of machine learning.
Methods for learning from digital data, and for organizing and representing what has been learned, have developed rapidly.
Initially, forensic applications of machine learning focused on tensor data, such as digital images.
About a decade ago, research in this field progressed towards tokenization, particularly in natural language processing.
Only recently has tabular data been considered as well.
However, the realm of forensics encompasses many other forms of structured data that have yet to be fully explored within the framework of machine learning.
The objective of this thesis is to investigate, using toy problems, which architectural elements of deep learning models are particularly well suited to learning structured data in the context of digital forensics. The student makes a justified choice of architectural elements (documenting their approach), conducts their own experiments, evaluates the results, and gives recommendations for future investigations.