Brian Ichter
Research Scientist, Google Brain
Chain-of-thought prompting elicits reasoning in large language models
J Wei, X Wang, D Schuurmans, M Bosma, B Ichter, F Xia, E Chi, Q Le, ...
arXiv preprint arXiv:2201.11903, 2022
Do As I Can, Not As I Say: Grounding language in robotic affordances
M Ahn, A Brohan, N Brown, Y Chebotar, O Cortes, B David, C Finn, C Fu, ...
arXiv preprint arXiv:2204.01691, 2022
PaLM-E: An embodied multimodal language model
D Driess, F Xia, MSM Sajjadi, C Lynch, A Chowdhery, B Ichter, A Wahid, ...
arXiv preprint arXiv:2303.03378, 2023
Inner monologue: Embodied reasoning through planning with language models
W Huang, F Xia, T Xiao, H Chan, J Liang, P Florence, A Zeng, J Tompson, ...
arXiv preprint arXiv:2207.05608, 2022
RT-1: Robotics Transformer for real-world control at scale
A Brohan, N Brown, J Carbajal, Y Chebotar, J Dabis, C Finn, ...
arXiv preprint arXiv:2212.06817, 2022
Code as policies: Language model programs for embodied control
J Liang, W Huang, F Xia, P Xu, K Hausman, B Ichter, P Florence, A Zeng
2023 IEEE International Conference on Robotics and Automation (ICRA), 9493-9500, 2023
RT-2: Vision-language-action models transfer web knowledge to robotic control
A Brohan, N Brown, J Carbajal, Y Chebotar, X Chen, K Choromanski, ...
arXiv preprint arXiv:2307.15818, 2023
Socratic models: Composing zero-shot multimodal reasoning with language
A Zeng, M Attarian, B Ichter, K Choromanski, A Wong, S Welker, ...
arXiv preprint arXiv:2204.00598, 2022
Learning sampling distributions for robot motion planning
B Ichter, J Harrison, M Pavone
arXiv preprint arXiv:1709.05448, 2018
LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action
D Shah, B Osiński, B Ichter, S Levine
Conference on Robot Learning, 492-504, 2023
Robot motion planning in learned latent spaces
B Ichter, M Pavone
IEEE Robotics and Automation Letters 4 (3), 2407-2414, 2019
Deterministic sampling-based motion planning: Optimality, complexity, and performance
L Janson, B Ichter, M Pavone
The International Journal of Robotics Research 37 (1), 46-61, 2018
Language to rewards for robotic skill synthesis
W Yu, N Gileadi, C Fu, S Kirmani, KH Lee, MG Arenas, HTL Chiang, ...
arXiv preprint arXiv:2306.08647, 2023
Open X-Embodiment: Robotic learning datasets and RT-X models
A Padalkar, A Pooley, A Jain, A Bewley, A Herzog, A Irpan, A Khazatsky, ...
arXiv preprint arXiv:2310.08864, 2023
Learning language-conditioned robot behavior from offline data and crowd-sourced annotation
S Nair, E Mitchell, K Chen, B Ichter, S Savarese, C Finn
Conference on Robot Learning, 1303-1315, 2022
Open-vocabulary queryable scene representations for real world planning
B Chen, F Xia, B Ichter, K Rao, K Gopalakrishnan, MS Ryoo, A Stone, ...
2023 IEEE International Conference on Robotics and Automation (ICRA), 11509 …, 2023
Large language models as general pattern machines
S Mirchandani, F Xia, P Florence, B Ichter, D Driess, MG Arenas, K Rao, ...
arXiv preprint arXiv:2307.04721, 2023
Scaling robot learning with semantically imagined experience
T Yu, T Xiao, A Stone, J Tompson, A Brohan, S Wang, J Singh, C Tan, ...
arXiv preprint arXiv:2302.11550, 2023
Learned critical probabilistic roadmaps for robotic motion planning
B Ichter, E Schmerling, TWE Lee, A Faust
2020 IEEE International Conference on Robotics and Automation (ICRA), 9535-9541, 2020
Grounded decoding: Guiding text generation with grounded models for robot control
W Huang, F Xia, D Shah, D Driess, A Zeng, Y Lu, P Florence, I Mordatch, ...
arXiv preprint arXiv:2303.00855, 2023