I am a Founding Member of DatologyAI. Previously, I was a Staff AI Researcher at Google DeepMind, where I worked on building foundational multimodal technology aimed at enabling new capabilities with LLMs. Before that, I spent almost seven wonderful years at Facebook AI Research (FAIR) in New York, where I worked on computer vision and machine learning.
At FAIR, I led research on representation learning with self-supervision from uncurated datasets, training large-scale computer vision models (notable projects: ImageNet in 1 Hour and SEER, a 10-billion-parameter self-supervised model), and building socially responsible AI models. I led the development of the self-supervised learning library VISSL and am a recipient of the Best Student Paper Award at ICCV 2017 for Focal Loss. I also led and organized the first-ever self-supervised learning challenge, at ICCV 2019.
My research interests include language modeling, evaluations for language models, data curation and filtering, multimodal learning, computer vision, retrieval augmentation, personalized AI, and socially responsible AI.
TechCrunch article on ImageNet in 1 Hour.
CNBC article on SEER (training A.I. to "see").
NVIDIA Developer on ImageNet in 1 Hour.
GeekWire on ImageNet in 1 Hour.
NVIDIA Developer on self-supervised learning beating SOTA computer vision models.
WIRED article on AI Teaching Itself to See With Less Human Help.
CNET on training computers to learn like humans do.
ImageNet in 1 Hour at the NeurIPS 2017 Supercomputing workshop.
Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision
arXiv, 2022
Priya Goyal, Quentin Duval, Isaac Seessel, Mathilde Caron, Ishan Misra, Levent Sagun, Armand Joulin, Piotr Bojanowski
[arXiv] [blogpost] [code] [bib]
Fairness Indicators for Systematic Assessments of Visual Feature Extractors
FAccT, 2022
Priya Goyal, Adriana Romero Soriano, Caner Hazirbas, Levent Sagun, Nicolas Usunier
[arXiv] [blogpost] [code] [bib]
Self-supervised Pretraining of Visual Features in the Wild
arXiv, 2021
Priya Goyal, Mathilde Caron, Benjamin Lefaudeux, Min Xu, Pengchao Wang, Vivek Pai, Mannat Singh, Vitaliy Liptchinsky, Ishan Misra, Armand Joulin, Piotr Bojanowski
[arXiv] [blogpost] [code] [bib]
VISSL: A library for state-of-the-art self-supervised learning from images
Released Jan 2021
Priya Goyal, Quentin Duval, Jeremy Reizenstein, Matthew Leavitt, Min Xu, Benjamin Lefaudeux, Mannat Singh, Vinicius Reis, Mathilde Caron, Piotr Bojanowski, Armand Joulin, Ishan Misra
[website] [tutorials] [GitHub] [Docs] [bib]
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
NeurIPS, 2020
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin
[arXiv] [blogpost] [code] [bib]
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
arXiv, 2017
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He
[arXiv] [NeurIPS 2017 talk]