Kelvin Guu
Senior Staff Research Scientist, Google
Senior Staff Research Scientist, Google — verified email
Cited by
Finetuned language models are zero-shot learners
J Wei, M Bosma, VY Zhao, K Guu, AW Yu, B Lester, N Du, AM Dai, QV Le
arXiv preprint arXiv:2109.01652, 2021
Retrieval augmented language model pre-training
K Guu, K Lee, Z Tung, P Pasupat, M Chang
International Conference on Machine Learning, 3929-3938, 2020
Traversing Knowledge Graphs in Vector Space
K Guu, J Miller, P Liang
Empirical Methods in Natural Language Processing (EMNLP), 2015
Generating Sentences by Editing Prototypes
K Guu, TB Hashimoto, Y Oren, P Liang
Transactions of the Association for Computational Linguistics 6, 437-450, 2018
From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood
K Guu, P Pasupat, EZ Liu, P Liang
Association for Computational Linguistics (ACL), 2017
A Retrieve-and-Edit Framework for Predicting Structured Outputs
TB Hashimoto, K Guu, Y Oren, PS Liang
Advances in Neural Information Processing Systems, 10073-10083, 2018
Transforming question answering datasets into natural language inference datasets
D Demszky, K Guu, P Liang
arXiv preprint arXiv:1809.02922, 2018
Promptagator: Few-shot dense retrieval from 8 examples
Z Dai, VY Zhao, J Ma, Y Luan, J Ni, J Lu, A Bakalov, K Guu, KB Hall, ...
arXiv preprint arXiv:2209.11755, 2022
Reinforcement learning on web interfaces using workflow-guided exploration
EZ Liu, K Guu, P Pasupat, T Shi, P Liang
arXiv preprint arXiv:1802.08802, 2018
RARR: Researching and revising what language models say, using language models
L Gao, Z Dai, P Pasupat, A Chen, AT Chaganty, Y Fan, VY Zhao, N Lao, ...
arXiv preprint arXiv:2210.08726, 2022
Pretraining with contrastive sentence objectives improves discourse performance of language models
D Iter, K Guu, L Lansing, D Jurafsky
arXiv preprint arXiv:2005.10389, 2020
Light timeout optimization
X Gu, K Gu, D Nulu
US Patent 8,538,596, 2013
KERMIT: Generative insertion-based modeling for sequences
W Chan, N Kitaev, K Guu, M Stern, J Uszkoreit
arXiv preprint arXiv:1906.01604, 2019
NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned
S Min, J Boyd-Graber, C Alberti, D Chen, E Choi, M Collins, K Guu, ...
NeurIPS 2020 Competition and Demonstration Track, 86-111, 2021
Unlocking compositional generalization in pre-trained models using intermediate representations
J Herzig, P Shaw, MW Chang, K Guu, P Pasupat, Y Zhang
arXiv preprint arXiv:2104.07478, 2021
Towards tracing factual knowledge in language models back to the training data
E Akyürek, T Bolukbasi, F Liu, B Xiong, I Tenney, J Andreas, K Guu
arXiv preprint arXiv:2205.11482, 2022
Neural data augmentation via example extrapolation
K Lee, K Guu, L He, T Dozat, HW Chung
arXiv preprint arXiv:2102.01335, 2021
Dialog inpainting: Turning documents into dialogs
Z Dai, AT Chaganty, VY Zhao, A Amini, QM Rashid, M Green, K Guu
International Conference on Machine Learning, 4558-4586, 2022
Mapping natural language commands to web elements
P Pasupat, TS Jiang, EZ Liu, K Guu, P Liang
arXiv preprint arXiv:1808.09132, 2018
Controllable semantic parsing via retrieval augmentation
P Pasupat, Y Zhang, K Guu
arXiv preprint arXiv:2110.08458, 2021