Priya Goyal

Staff AI Research Engineer, Google DeepMind
Google Scholar
LinkedIn
Twitter
Email
Github

I have been a Staff AI Research Engineer at Google DeepMind since August 2022, where I work on building foundational multimodal technology aimed at enabling new capabilities with LLMs. Previously, I spent almost seven wonderful years at Facebook AI Research (FAIR) in New York, USA, where I worked on computer vision and machine learning.

At FAIR, I led research on representation learning using self-supervision from uncurated datasets, training large-scale computer vision models (notable projects: ImageNet in 1 Hour and the 10-billion-parameter self-supervised SEER model), and building socially responsible AI models. I led the development of the self-supervised learning library VISSL and am a recipient of the Best Student Paper Award at ICCV 2017 for Focal Loss. I also led and organized the first-ever self-supervised learning challenge at ICCV'19.

My research interests include multimodal learning, computer vision, retrieval augmentation, personalized AI, and socially responsible AI.

Talks / Media coverage

TechCrunch article on ImageNet in 1 Hour.
CNBC article on SEER (training A.I. to "see").
NVIDIA Developer on ImageNet in 1 Hour.
GeekWire on ImageNet in 1 Hour.
NVIDIA Developer on self-supervised learning beating SOTA computer vision models.
WIRED article on AI Teaching Itself to See With Less Human Help.
CNET on training computers to learn like humans do.
ImageNet in 1 Hour talk at the NeurIPS 2017 Supercomputing workshop.

Publications

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision
arXiv, 2022
Priya Goyal, Quentin Duval, Isaac Seessel, Mathilde Caron, Ishan Misra, Levent Sagun, Armand Joulin, Piotr Bojanowski
[arXiv] [blogpost] [code] [bib]

Fairness Indicators for Systematic Assessments of Visual Feature Extractors
FAccT, 2022
Priya Goyal, Adriana Romero Soriano, Caner Hazirbas, Levent Sagun, Nicolas Usunier
[arXiv] [blogpost] [code] [bib]

A Self-Supervised Descriptor for Image Copy Detection
arXiv, 2022
Ed Pizzi, Sreya Dutta Roy, Sugosh Nagavara Ravindra, Priya Goyal, Matthijs Douze
[arXiv] [bib]

Fully Sharded Data Parallel: faster AI training with fewer GPUs
Facebook Engineering blog, 2021
Myle Ott, Sam Shleifer, Min Xu, Priya Goyal, Quentin Duval, Vittorio Caggiano
[blog] [docs]

Self-supervised pretraining of visual features in the wild
arXiv, 2021
Priya Goyal, Mathilde Caron, Benjamin Lefaudeux, Min Xu, Pengchao Wang, Vivek Pai, Mannat Singh, Vitaliy Liptchinsky, Ishan Misra, Armand Joulin, Piotr Bojanowski
[arXiv] [blogpost] [code] [bib]

VISSL: A library for state-of-the-art self-supervised learning from images
Released Jan'2021
Priya Goyal, Quentin Duval, Jeremy Reizenstein, Matthew Leavitt, Min Xu, Benjamin Lefaudeux, Mannat Singh, Vinicius Reis, Mathilde Caron, Piotr Bojanowski, Armand Joulin, Ishan Misra
[website] [tutorials] [Github] [Docs] [bib]

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
NeurIPS 2020
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin
[arXiv] [blogpost] [code] [bib]

Scaling and Benchmarking Self-Supervised Visual Representation Learning
ICCV 2019
Priya Goyal, Dhruv Mahajan, Abhinav Gupta*, Ishan Misra*
[arXiv] [code] [bib]

Focal Loss for Dense Object Detection
ICCV 2017 (best student paper award)
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, Piotr Dollár
[arXiv] [bib]

Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
arXiv 2017
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He
[arXiv] [NeurIPS 2017 talk]

Resume

PDF