I’m Samuel Pfrommer, a fourth-year PhD student at Berkeley EECS working with Somayeh Sojoudi. My research interests span robust machine learning, safe reinforcement learning, and geometric deep learning.
I’m looking for industry positions starting May 2025.
sam.pfrommer@berkeley(dot)edu
Selected research
Ranking Manipulation for Conversational Search Engines (EMNLP 2024, Main). Major search engine providers are rapidly incorporating Large Language Model (LLM)-generated content in response to user queries. These conversational search engines operate by loading retrieved website text into the LLM context for summarization and interpretation. This work investigates the impact of adversarial prompt injections on the ranking order of sources referenced by conversational search engines.
Transport of Algebraic Structure to Latent Embeddings (ICML 2024, Spotlight). In machine learning, it is common to produce latent embeddings of objects which live in algebraic spaces (e.g., sets, functions, and probability distributions). We present a principled approach for learning latent-space operations which correspond to operations on the underlying data-space algebra. Our approach constructs parameterizations which provably satisfy applicable algebraic laws such as commutativity, distributivity, and associativity.
Initial State Interventions for Deconfounded Imitation Learning (CDC 2023). We address the causal confusion problem in imitation learning, wherein learned policies attend to features which are only spuriously correlated with expert actions. Our proposed algorithm identifies and masks spuriously correlated features using causal inference techniques, and doesn’t require expert querying, expert reward function knowedge, or causal graph specification.
Asymmetric Certified Robustness via Feature-Convex Neural Networks (NeurIPS 2023). We introduce the asymmetric certified robustness problem, where the adversary is atempting to only induce false negatives. Leverages input-convex neural networks to provide fast, closed-form certified radii for this problem for any norm.
Safe Reinforcement Learning with Chance-constrained Model Predictive Control (L4DC 2022). We wrap a policy gradient agent with an MPC safety guide which trains the base policy to behave safely.
ContactNets: Learning Discontinuous Contact Dynamics with Smooth, Implicit Representations (CoRL 2020). We reparameterize the contact dynamics learning problem to handle nonsmooth impact and stiction. Our method compares favorably against unstructured baselines on a novel real-world block tossing dataset.
Selected projects
TorchExplorer. TorchExplorer is a general-purpose tool to see what’s happening in your network—analagous to an oscilloscope in electronics. It interactively visualizes model structure and input/output/parameter histograms during training. It integrates with weights and biases and can also run locally.