Ishita Dasgupta
Senior Research Scientist, DeepMind
Verified email at google.com
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
Gemini Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2183 · 2023
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
Gemini Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 684 · 2024
Can language models learn from explanations in context?
AK Lampinen, I Dasgupta, SCY Chan, K Matthewson, MH Tessler, ...
Findings of the Association for Computational Linguistics: EMNLP 2022, 537-563, 2022
Cited by 261 · 2022
Are Convolutional Neural Networks or Transformers more like human vision?
S Tuli, I Dasgupta, E Grant, TL Griffiths
Proceedings of the Annual Meeting of the Cognitive Science Society 43 (43), 2021
Cited by 231 · 2021
Language models show human-like content effects on reasoning
I Dasgupta, AK Lampinen, SCY Chan, A Creswell, D Kumaran, ...
arXiv preprint arXiv:2207.07051, 2022
Cited by 168 · 2022
Where do hypotheses come from?
I Dasgupta, E Schulz, SJ Gershman
Cognitive psychology 96, 1-25, 2017
Cited by 147 · 2017
Evaluating compositionality in sentence embeddings
I Dasgupta, D Guo, A Stuhlmüller, SJ Gershman, ND Goodman
Proceedings of the Annual Meeting of the Cognitive Science Society 40 (40), 2018
Cited by 141 · 2018
Causal reasoning from meta-reinforcement learning
I Dasgupta, J Wang, S Chiappa, J Mitrovic, P Ortega, D Raposo, ...
arXiv preprint arXiv:1901.08162, 2019
Cited by 133 · 2019
A theory of learning to infer.
I Dasgupta, E Schulz, JB Tenenbaum, SJ Gershman
Psychological review 127 (3), 412, 2020
Cited by 89 · 2020
Remembrance of inferences past: Amortization in human hypothesis generation
I Dasgupta, E Schulz, ND Goodman, SJ Gershman
Cognition 178, 67-81, 2018
Cited by 80 · 2018
Memory as a computational resource
I Dasgupta, SJ Gershman
Trends in cognitive sciences 25 (3), 240-251, 2021
Cited by 65 · 2021
PIVOT: Iterative visual prompting elicits actionable knowledge for VLMs
S Nasiriany, F Xia, W Yu, T Xiao, J Liang, I Dasgupta, A Xie, D Driess, ...
arXiv preprint arXiv:2402.07872, 2024
Cited by 61 · 2024
Collaborating with language models for embodied reasoning
I Dasgupta, C Kaeser-Chen, K Marino, A Ahuja, S Babayan, F Hill, ...
NeurIPS 2022 Language and Reinforcement Learning Workshop, 2022
Cited by 59 · 2022
Tell me why! Explanations support learning relational and causal structure
AK Lampinen, N Roy, I Dasgupta, SCY Chan, A Tam, J McClelland, C Yan, ...
International Conference on Machine Learning, 11868-11890, 2022
Cited by 40 · 2022
Transformers generalize differently from information stored in context vs in weights
SCY Chan, I Dasgupta, J Kim, D Kumaran, AK Lampinen, F Hill
Memory in Artificial and Real Intelligence (MemARI) workshop NeurIPS 2022, 2022
Cited by 38 · 2022
Using natural language and program abstractions to instill human inductive biases in machines
S Kumar, CG Correa, I Dasgupta, R Marjieh, MY Hu, R Hawkins, ...
Advances in Neural Information Processing Systems 35, 167-180, 2022
Cited by 37 · 2022
Meta-learned models of cognition
M Binz, I Dasgupta, AK Jagadish, M Botvinick, JX Wang, E Schulz
Behavioral and Brain Sciences 47, e147, 2024
Cited by 36 · 2024
Meta-Learning of Structured Task Distributions in Humans and Machines
S Kumar, I Dasgupta, JD Cohen, ND Daw, TL Griffiths
International Conference on Learning Representations, 2021
Cited by 28* · 2020
A buried ionizable residue destabilizes the native state and the transition state in the folding of monellin
N Aghera, I Dasgupta, JB Udgaonkar
Biochemistry 51 (45), 9058-9066, 2012
Cited by 25 · 2012
Passive learning of active causal strategies in agents and language models
A Lampinen, S Chan, I Dasgupta, A Nam, J Wang
Advances in Neural Information Processing Systems 36, 2024
Cited by 20 · 2024