I’m Samuel Pfrommer, a third-year PhD student at Berkeley EECS working with Somayeh Sojoudi. My research interests span robust machine learning, safe reinforcement learning, and geometric deep learning.
Initial State Interventions for Deconfounded Imitation Learning. We address the causal confusion problem in imitation learning, in which learned policies attend to features that are only spuriously correlated with expert actions. Our proposed algorithm identifies and masks spuriously correlated features using causal inference techniques, and requires no expert querying, expert reward function knowledge, or causal graph specification.
Asymmetric Certified Robustness via Feature-Convex Neural Networks. We introduce the asymmetric certified robustness problem, in which the adversary attempts only to induce false negatives. Our method leverages input-convex neural networks to provide fast, closed-form certified radii for this problem for any norm.
Safe Reinforcement Learning with Chance-constrained Model Predictive Control. We wrap a policy gradient agent with a chance-constrained MPC safety guide that trains the base policy to behave safely.
ContactNets: Learning Discontinuous Contact Dynamics with Smooth, Implicit Representations. We reparameterize the contact dynamics learning problem to handle nonsmooth impact and stiction, and evaluate our approach on a novel real-world block-tossing dataset.