I am a PhD student at the Robotics Institute, CMU, where I'm advised by Prof. Oliver Kroemer.
My current research interests lie in robot learning, more specifically at the intersection of reinforcement learning and imitation learning for robotics.
Previously, I completed my Master's in Robotics at CMU, during which I was advised by Prof. Kris Kitani. I completed my undergraduate degree in Computer Science at IIIT-H, where I worked with Prof. Prasad Saripalli on virtualization systems. I have also worked as a Game Programmer at Pocket Gems and as a Software Engineer at Google.
Mohit Sharma, Kevin Zhang, Oliver Kroemer
We learn semantic embedding spaces for robot food cutting.
Kevin Zhang, Mohit Sharma, Manuela Veloso, Oliver Kroemer
We show the utility of using multimodal sensory data for robust food cutting.
Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information
Mohit Sharma*, Arjun Sharma*, Nick Rhinehart, Kris M. Kitani
We use information-theoretic tools to show how expert demonstrations can be used to learn sub-task policies and combine them together in novel ways.
Arjun Sharma*, Mohit Sharma*, Nick Rhinehart, Kris M. Kitani
Using imitation learning to learn a single policy for a complex task with multiple modes or hierarchical structure can be challenging. In this work, we model the interaction between sub-tasks and their resulting state-action trajectory sequences as a directed graphical model.
Mohit Sharma, Joachim R. Groeger, Robert Miller, Kris M. Kitani
We draw a connection to existing results in econometrics to describe an alternative formulation of inverse reinforcement learning (IRL). In particular, we describe an algorithm that uses Conditional Choice Probabilities (CCPs), maximum likelihood estimates of the policy computed from expert demonstrations, to solve the IRL problem.
Mohit Sharma, Dragan Ahematovic, Laszlo Jeni, Kris M. Kitani
We propose a novel Multi-Scale Deep Convolution-LSTM architecture capable of recognizing short- and long-term motion patterns found in head gestures, from video data of natural and unconstrained conversations.