Hi, I’m Ori! I’m a graduate student at the University of Tübingen and the International Max Planck Research School for Intelligent Systems (IMPRS-IS), working in Matthias Bethge’s lab.
I’m interested in closing the gap between how machine learning models perform on known benchmarks and how they perform in more complex real-world scenarios.
Previously, I graduated from Tel Aviv University with a BSc in Mathematics and an MSc in Computer Science, advised by Lior Wolf.
I’m part of the Freie Wissenschaftliche Vereinigung. My brother Ofir Press is a machine learning researcher.
Papers
SWE-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?
John Yang*, Carlos E. Jimenez*, Alex L. Zhang, Kilian Lieret, Joyce Yang, Xindi Wu, Ori Press, Niklas Muennighoff, Gabriel Synnaeve, Karthik Narasimhan, Diyi Yang, Sida I. Wang, Ofir Press
Preprint, 2024
[paper] [website]
CiteME: Can Language Models Accurately Cite Scientific Claims?
Ori Press*, Andreas Hochlehnert*, Ameya Prabhu, Vishaal Udandarao, Ofir Press‡, Matthias Bethge‡ (*/‡ shared first/last authorship)
Neural Information Processing Systems, 2024
[paper] [code] [website]
The Entropy Enigma: Success and Failure of Entropy Minimization
Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge
International Conference on Machine Learning, 2024
[paper] [code] [bib]
RDumb: A simple approach that questions our progress in continual test-time adaptation
Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge
Neural Information Processing Systems, 2023
[paper] [code] [bib]
Parts of the paper were accepted at the following workshops:
Shift Happens ’22 @ ICML
Principles of Distribution Shift ’22 @ ICML
Calibrated prediction in and out-of-domain for state-of-the-art saliency modeling
Akis Linardos*, Matthias Kümmerer*, Ori Press, Matthias Bethge
International Conference on Computer Vision, 2021
[paper] [bib]
Emerging Disentanglement in Auto-Encoder Based Unsupervised Image Content Transfer
Ori Press, Tomer Galanti, Sagie Benaim, Lior Wolf
International Conference on Learning Representations, 2019
[paper] [code] [bib]