Sara Hooker
Head of Cohere For AI
Verified email at cohere.com - Homepage
Title · Cited by · Year
A benchmark for interpretability methods in deep neural networks
S Hooker, D Erhan, PJ Kindermans, B Kim
Advances in neural information processing systems 32, 2019
Cited by 839* · 2019
The state of sparsity in deep neural networks
T Gale, E Elsen, S Hooker
arXiv preprint arXiv:1902.09574, 2019
Cited by 819 · 2019
The (un)reliability of saliency methods
PJ Kindermans, S Hooker, J Adebayo, M Alber, KT Schütt, S Dähne, ...
Explainable AI: Interpreting, explaining and visualizing deep learning, 267-280, 2019
Cited by 788 · 2019
Toward trustworthy AI development: mechanisms for supporting verifiable claims
M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ...
arXiv preprint arXiv:2004.07213, 2020
Cited by 402 · 2020
The hardware lottery
S Hooker
Communications of the ACM 64 (12), 58-65, 2021
Cited by 230 · 2021
What do compressed deep neural networks forget?
S Hooker, A Courville, G Clark, Y Dauphin, A Frome
WHI ICML 2019, 2019
Cited by 211* · 2019
Moving beyond “algorithmic bias is a data problem”
S Hooker
Patterns 2 (4), 2021
Cited by 197 · 2021
Characterising bias in compressed models
S Hooker, N Moorosi, G Clark, S Bengio, E Denton
arXiv preprint arXiv:2010.03058, 2020
Cited by 186 · 2020
Estimating example difficulty using variance of gradients
C Agarwal, D D'souza, S Hooker
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022, 2020
Cited by 103 · 2020
Frontier AI regulation: Managing emerging risks to public safety
M Anderljung, J Barnhart, A Korinek, J Leung, C O'Keefe, J Whittlestone, ...
arXiv preprint arXiv:2307.03718, 2023
Cited by 95 · 2023
Efficient methods for natural language processing: A survey
M Treviso, JU Lee, T Ji, B Aken, Q Cao, MR Ciosici, M Hassid, K Heafield, ...
Transactions of the Association for Computational Linguistics 11, 826-860, 2023
Cited by 88 · 2023
Evaluating the social impact of generative AI systems in systems and society
I Solaiman, Z Talat, W Agnew, L Ahmad, D Baker, SL Blodgett, C Chen, ...
arXiv preprint arXiv:2306.05949, 2023
Cited by 81 · 2023
Randomness in neural network training: Characterizing the impact of tooling
D Zhuang, X Zhang, S Song, S Hooker
Proceedings of Machine Learning and Systems 4, 316-336, 2022
Cited by 76 · 2022
The Goldilocks of pragmatic understanding: Fine-tuning strategy matters for implicature resolution by LLMs
L Ruis, A Khan, S Biderman, S Hooker, T Rocktäschel, E Grefenstette
Advances in Neural Information Processing Systems 36, 2024
Cited by 59* · 2024
Aya model: An instruction finetuned open-access multilingual language model
A Üstün, V Aryabumi, ZX Yong, WY Ko, D D'souza, G Onilude, N Bhandari, ...
arXiv preprint arXiv:2402.07827, 2024
Cited by 59 · 2024
Pushing mixture of experts to the limit: Extremely parameter efficient MoE for instruction tuning
T Zadouri, A Üstün, A Ahmadian, B Ermiş, A Locatelli, S Hooker
arXiv preprint arXiv:2309.05444, 2023
Cited by 58 · 2023
When less is more: Investigating data pruning for pretraining LLMs at scale
M Marion, A Üstün, L Pozzobon, A Wang, M Fadaee, S Hooker
arXiv preprint arXiv:2309.04564, 2023
Cited by 50 · 2023
The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
O Ahia, J Kreutzer, S Hooker
Findings of EMNLP 2021, 2021
Cited by 47 · 2021
The data provenance initiative: A large scale audit of dataset licensing & attribution in AI
S Longpre, R Mahari, A Chen, N Obeng-Marnu, D Sileo, W Brannon, ...
arXiv preprint arXiv:2310.16787, 2023
Cited by 40* · 2023
Aya dataset: An open-access collection for multilingual instruction tuning
S Singh, F Vargus, D Dsouza, BF Karlsson, A Mahendiran, WY Ko, ...
arXiv preprint arXiv:2402.06619, 2024
Cited by 36 · 2024
Articles 1–20