We are looking for a highly motivated PhD student to conduct research at the intersection of machine learning, visualization, and democracy and decision making.
The position is part of the project "Interpretability and Explainability as Drivers to Democracy", funded by the WWTF. The project concerns machine learning models used to make decisions with significant societal impact in democratic societies. Its aim is to enable the electorate to evaluate such models and their usage in an informed way, both through the interpretability and explainability of the models themselves and through communication of these models and their roles in the decision-making process in suitable forms. The underlying decision-making process involves stakeholders with different levels of expertise, and achieving the project's aims requires the development of novel machine learning models, visualization approaches, and guidelines.
Required qualifications are:
- Master's degree in Computer Science, Mathematics, or Statistics
- excellent foundational knowledge of machine learning and visualization
- team player with interest in interdisciplinary research
- ability to work independently and reliably
Desirable qualifications are:
- solid knowledge of, and interest in, probabilistic machine learning models, interpretable and explainable machine learning, and visualization
- good programming skills (Python and a deep learning framework such as PyTorch or TensorFlow)
- practical experience in the realization of machine learning projects
- previous publications (a strong plus)
Working on this project, you will be supervised by Ass.-Prof. Sebastian Tschiatschek (Machine Learning) and Prof. Torsten Möller (Visualization & Data Science), and will collaborate closely with Prof. Mark Coeckelbergh (Philosophy & Ethics).
Further information / contact
If you are interested in the position or have any questions, please don’t hesitate to get in touch.