Hi again, should I assume it's not happening?
RT-2 (the paper you cited) is a VLA, not an LLM. VLAs are what the "executor" in our diagram uses.
Hey Ted! Any updates? :)
We set it to some date in the future
Thanks! Vending-Bench v2 is going to be fire. Would love to include gpt5 <3
This is a great point. I admit I need to better understand what each model provider does behind the scenes in the API. It would be sad if the days of direct access to the model are gone.
We thought about that, but then it wouldn't be reproducible if we want to run it for new models later.
Thanks, that would be great!
Thanks for highlighting our work!
I see. Thank you!