Facebook AI Research
abhshkdz at fb dot com
- [Feb 22] Runner-up for the 2020 AAAI/ACM SIGAI Doctoral Dissertation Award.
- [Mar 21] Awarded the Georgia Tech Sigma Xi Best PhD Thesis Award.
- [Mar 21] Awarded the Georgia Tech College of Computing Dissertation Award.
- [Nov 20] The Open Catalyst Project was covered by Fortune, Engadget, CNBC, and VentureBeat.
- [Nov 20] Organizing the 4th Visually-Grounded Interaction & Language Workshop at NAACL.
- [Jul 20] Presenting Probing Emergent Semantics in Predictive Agents at ICML 2020 (Video).
- [Mar 20] I completed my PhD! My thesis, “Building agents that can see, talk, and act”, is here.
- [Nov 19] Organizing the Visual Question Answering and Dialog workshop at CVPR 2020.
- [Sep 19] Organizing the Visually-Grounded Interaction & Language Workshop at NeurIPS.
- [Jun 19] Presenting Targeted Multi-Agent Communication as an oral at ICML 2019 (Video).
- [Mar 19] Co-founded Caliper, which helps recruiters evaluate practical AI skills.
- [Feb 19] My work was featured in this wonderful article by Georgia Tech.
- [Jan 19] Awarded the Facebook Graduate Fellowship.
- [Jan 19] Awarded the Microsoft Research PhD Fellowship (declined).
- [Jan 19] Awarded the NVIDIA Graduate Fellowship (declined).
- [Jan 19] Organizing the 2nd Visual Dialog Challenge.
- [Oct 18] Presenting Neural Modular Control for Embodied QA at CoRL 2018 (Video).
- [Sep 18] Presenting results and analysis of the 1st Visual Dialog Challenge at ECCV 2018.
- [Jul 18] Presenting a tutorial on Connecting Language and Vision to Actions at ACL 2018.
- [Jun 18] Organizing the 1st Visual Dialog Challenge.
- [Jun 18] Presenting Embodied Question Answering as an oral at CVPR 2018 (Video).
- [Jun 18] Organizing the VQA Challenge and Visual Dialog Workshop at CVPR 2018.
- [Mar 18] Speaking on Embodied Question Answering at NVIDIA GTC (Video).
- [Dec 17] Awarded the Adobe Research Fellowship. (Department’s news story)
- [Dec 17] Awarded the Snap Inc. Research Fellowship. (Department’s news story)
- [Oct 17] Presenting Cooperative Visual Dialog Agents as an oral at ICCV 2017 (Video).
- [Jul 17] Presenting Visual Dialog at the VQA Challenge Workshop, CVPR 2017 (Video).
- [Jul 17] Presenting our paper on Visual Dialog as a spotlight at CVPR 2017 (Video).
I am a Research Scientist at Facebook AI Research (FAIR) working on deep neural networks and their applications to climate change. My current focus is electrocatalyst discovery for renewable energy storage as part of the Open Catalyst Project. Renewable energy sources (like solar and wind) are great but intermittent: the sun shines only during the day, and in the evening we fall back on fossil fuels. To avoid this, we need to discover cheap, scalable ways of converting electricity from renewable sources into storable forms, so that we can shift it from times of peak generation to times of peak demand. AI can help accelerate the chemical simulations needed to make these discoveries.
Before this, I was a Computer Science PhD student at Georgia Tech, advised by Dhruv Batra and working closely with Devi Parikh. My research focused on developing artificial agents that can see (computer vision), talk (language modeling), and act (reinforcement learning).
During my PhD, I interned three times at Facebook AI Research: in Summer 2017 and Spring 2018 in Menlo Park, working with Georgia Gkioxari, Devi Parikh, and Dhruv Batra on training embodied agents for navigation and question answering in simulated environments (see embodiedqa.org), and in Summer 2018 in Montréal, working with Mike Rabbat and Joelle Pineau on communication protocols in multi-agent reinforcement learning. In 2019, I interned at DeepMind in London, working on grounded language learning with Felix Hill, Laura Rimell, and Stephen Clark, and at Tesla Autopilot in Palo Alto, working on differentiable neural architecture search with Andrej Karpathy.
I received my Bachelor’s degree from IIT Roorkee in 2015. During my undergrad, I took part in Google Summer of Code (2013 and 2014), won several competitions (Yahoo! HackU!, Microsoft Code.Fun.Do., Deloitte CCTC 2013 and 2014), and owe most of my programming/tinkering bent to SDSLabs.
On the side, I built aideadlin.es (countdowns to a bunch of CV/NLP/ML/AI conference deadlines) and aipaygrad.es (statistics of industry job offers in AI), neural-vqa and its extension neural-vqa-attention, HackFlowy, graf, Erdős, etc. I also occasionally dabble in generative art. I like this map tracking the places I’ve been to. Blog posts from a previous life.
How Do Graph Networks Generalize to Large and Diverse Molecular Systems?
Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale
Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations
Rotation Invariant Graph Neural Networks using Spin Convolutions
Automated Video Description for Blind and Low Vision Users
CHI EA 2021
Auxiliary Tasks and Exploration Enable ObjectNav
ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations
ICLR 2021 Deep Learning for Simulation Workshop
The Open Catalyst 2020 (OC20) Dataset and Community Challenges
ACS Catalysis 2021
An Introduction to Electrocatalyst Design using Machine Learning for Renewable Energy Storage
Auxiliary Tasks Speed Up Learning PointGoal Navigation
Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline
Building agents that can see, talk, and act
AAAI/ACM SIGAI Doctoral Dissertation Award, Runner-up
Georgia Tech Sigma Xi Best PhD Thesis Award
Georgia Tech College of Computing Dissertation Award
Probing Emergent Semantics in Predictive Agents via Question Answering
Feel The Music: Automatically Generating A Dance For An Input Song
IR-VIC: Unsupervised Discovery of Sub-goals for Transfer in RL
IJCAI-PRICAI 2020, ICLR 2019 Task-Agnostic RL Workshop
Improving Generative Visual Dialog by Answering Diverse Questions
TarMAC: Targeted Multi-Agent Communication
Embodied Question Answering in Photorealistic Environments with Point Clouds
CVPR 2019 (Oral)
Audio-Visual Scene-Aware Dialog
End-to-end Audio Visual Scene-Aware Dialog Using Multimodal Attention-based Video Features
Neural Modular Control for Embodied Question Answering
CoRL 2018 (Spotlight)
Embodied Question Answering
CVPR 2018 (Oral)
Evaluating Visual Conversational Agents via Cooperative Human-AI Games
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
ICCV 2017 (Oral)
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
IJCV 2019, ICCV 2017, NIPS 2016 Interpretable ML for Complex Systems Workshop
PAMI 2018, CVPR 2017 (Spotlight)
Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
CVIU 2017, EMNLP 2016, ICML 2016 Workshop on Visualization for Deep Learning
AirMaps was a fun hackathon project that let users navigate Google Earth with gestures and speech commands using a Kinect sensor. It was the winning entry in Microsoft Code.Fun.Do.
Another fun hackathon-winning project, built during Yahoo! HackU! 2012: WebRTC-based P2P video chat that was faster than any other video chat provider at the time (before Google launched Hangouts).
An ugly-looking but super-effective bash script for downloading entire playlists from 8tracks. (Still works as of 10/2016.)