I'm a postdoc at UC Berkeley doing deep learning research in Pieter Abbeel's group. My primary focus areas are unsupervised learning and reinforcement learning. Before this, I founded a venture-backed startup (YC W17, Forbes 30 Under 30), and before that I earned a PhD in theoretical physics from UChicago, where I was a Bloomenthal Fellow.

Links: Twitter, Google Scholar, Email


Blog posts

Multi-head Attention, GPT, and BERT

Efficient Patch Extraction


In a series of papers on representation learning (CURL, RAD, ATC), we showed that RL from pixels can be as efficient as RL from state, and that real-robot control policies can be learned from pixels in just 30 minutes of training (FERM). These days I work on self-supervised exploration and skill extraction.


Publications

* indicates equal contribution

Hierarchical Few-Shot Imitation with Skill Transition Models, Kourosh Hakhamaneshi*, Ruihan Zhao*, Albert Zhan*, Pieter Abbeel, Michael Laskin, 2021

Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL, Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin, 2021

A Framework for Efficient Robotic Manipulation, Albert Zhan*, Philip Zhao*, Lerrel Pinto, Pieter Abbeel, Michael Laskin, 2021


URLB: Unsupervised Reinforcement Learning Benchmark, Michael Laskin*, Denis Yarats*, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel, NeurIPS, 2021

Reinforcement Learning with Latent Flow, Wenling Shang*, Xiaofei Wang*, Aravind Srinivas, Aravind Rajeswaran, Yang Gao, Pieter Abbeel, Michael Laskin, NeurIPS, 2021

Decision Transformer: Reinforcement Learning via Sequence Modeling, Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch, NeurIPS, 2021
