In the hKI-Chemie project, we are researching collaboration between humans and artificial intelligence in the chemical industry. The aim is to create a reliable and robust system through human-AI symbiosis that outperforms purely algorithmic or purely human decision-making. In this context, I am concerned with explainability in machine learning on different data sets that are restricted by their domain, so that common approaches such as LRP are not directly practical.
Topics for Theses
Different approaches to explainability can be explored in theses closely related to the project. Experience with neural networks and machine learning, as well as their application in Keras, is helpful. Further theses, including topics from other areas, are available on request.
Master Thesis: Explainability through reclassification of heatmapped multiclass images

In this work, a tool is to be built with which an image colored by an LRP process can be reclassified through minimal changes. The handwritten digits of the MNIST dataset serve as test data. To explain a decision made by the AI, areas of the image that were relevant for the classification are colored using LRP. Through iterative changes to the image, a new image is created that the AI interprets completely differently.
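The iterative-reclassification idea can be sketched in a few lines. The following is a minimal NumPy sketch, not the thesis implementation: a linear softmax classifier stands in for the trained Keras network, and the gradient of the target logit (here simply a weight column) drives small pixel updates until the predicted class flips. With a real Keras model one would obtain this gradient via tf.GradientTape, and an LRP heatmap could mask the update so that only the relevant pixels are changed; the function names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in classifier: linear softmax over flattened 28x28 images.
# In the thesis, this would be the trained Keras network for MNIST.
W = rng.normal(scale=0.1, size=(784, 10))
b = np.zeros(10)

def predict_proba(x):
    z = x @ W + b
    z -= z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def reclassify(x, target, step=0.01, max_iter=2000):
    """Iteratively apply small pixel changes until the classifier
    assigns `target`. For this linear stand-in, the gradient of the
    target logit w.r.t. the input is just W[:, target]; an LRP mask
    could restrict the update to relevant pixels only."""
    x = x.copy()
    for _ in range(max_iter):
        if predict_proba(x).argmax() == target:
            break
        x += step * W[:, target]     # nudge toward the target class
        np.clip(x, 0.0, 1.0, out=x)  # keep a valid pixel range
    return x

x0 = rng.random(784)                 # stand-in for an MNIST digit
orig = predict_proba(x0).argmax()
target = (orig + 1) % 10
x_new = reclassify(x0, target)
print("original class:", orig, "-> new class:", predict_proba(x_new).argmax())
```

The clipping step keeps the perturbed image a valid grayscale image; measuring the L2 distance between x0 and x_new would quantify how "minimal" the change is, which is one natural evaluation criterion for such a tool.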