- One Does Not Simply Estimate State: Comparing Model-based and Model-free Reinforcement Learning on the Partially Observable MordorHike Benchmark. Eighteenth European Workshop on Reinforcement Learning (EWRL 2025). [Code]
- Dreaming of Many Worlds: Learning Contextual World Models Aids Zero-Shot Generalization. Reinforcement Learning Journal, vol. 3, 2024, pp. 1317–1350. Presented at RLC 2024 and accepted to EWRL 2024. [Code] [arXiv]
- Perception Matters: Enhancing Embodied AI with Uncertainty-Aware Semantic Segmentation. ISRR 2024. [Code] [arXiv]
- When BERT Plays the Lottery, All Tickets Are Winning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3208–3229, Online. Association for Computational Linguistics. [Code] [arXiv]
- Zoho at SemEval-2019 Task 9: Semi-supervised Domain Adaptation using Tri-training for Suggestion Mining. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 1282–1286, Minneapolis, Minnesota, USA. Association for Computational Linguistics. [Code] [arXiv]
I research methods that enable robots to solve tasks robustly and to exhibit intrinsic motivation and agency. Currently I am focusing on Causal World Models, Intrinsic Motivation, and Reinforcement Learning for Robotics at the University of Tübingen and the Max Planck Institute for Intelligent Systems. Before this, I worked as a machine learning engineer and a software engineer.
Here’s my up-to-date resume. You can also find my sporadically updated blog posts here.
Send me an email if you wish to collaborate with me or discuss anything interesting!