I research methods to make robots solve tasks robustly and exhibit intrinsic motivation and agency. Currently, I focus on Causal World Models, Intrinsic Motivation, and Reinforcement Learning for Robotics at the University of Tübingen and the Max Planck Institute for Intelligent Systems. Before this, I worked as a machine learning engineer and software engineer.

Here’s my updated resume. You can also find my sporadically updated blog posts here.

Send me an email if you'd like to collaborate or discuss anything interesting!

Selected Publications

  • One Does Not Simply Estimate State: Comparing Model-based and Model-free Reinforcement Learning on the Partially Observable MordorHike Benchmark
    Sai Prasanna, André Biedenkapp, and Raghu Rajan
    Eighteenth European Workshop on Reinforcement Learning (EWRL 2025)

    CODE LINK
  • Dreaming of Many Worlds: Learning Contextual World Models Aids Zero-Shot Generalization
    Sai Prasanna*, Karim Farid*, Raghu Rajan, and André Biedenkapp
    Reinforcement Learning Journal, vol. 3, 2024, pp. 1317–1350. Presented at RLC 2024 and accepted to EWRL 2024.

    CODE ARXIV LINK
  • Perception Matters: Enhancing Embodied AI with Uncertainty-Aware Semantic Segmentation
    Sai Prasanna*, Daniel Honerkamp*, Kshitij Sirohi*, Tim Welschehold, Wolfram Burgard, and Abhinav Valada
    ISRR 2024

    CODE ARXIV LINK
  • When BERT plays the lottery, all tickets are winning
    Sai Prasanna, Anna Rogers, and Anna Rumshisky
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3208–3229, Online. Association for Computational Linguistics.

    CODE ARXIV LINK
  • Zoho at SemEval-2019 Task 9: Semi-supervised Domain Adaptation using Tri-training for Suggestion Mining
    Sai Prasanna and Sri Ananda Seelan
    In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 1282–1286, Minneapolis, Minnesota, USA. Association for Computational Linguistics.

    CODE ARXIV LINK