Andrew Forney

Associate Professor of Computer Science

  • Los Angeles, CA, United States
  • Computer Science

Seaver College of Science and Engineering

Contact

Phone: 310.338.3077
Email: andrew.forney@lmu.edu
Office: Doolan 201A

Biography
Professor Andrew Forney is a third-generation Lion who completed his B.S. in Computer Science and Psychology at Loyola Marymount University and his M.S. and Ph.D. in Computer Science at UCLA. His research bridges the disciplines of cognitive psychology, artificial intelligence (AI), and experimental design, with an emphasis on causal inference. He is interested in the mechanics of counterfactual reasoning and its applications to human and machine learning, reinforcement, and decision-making for crafting increasingly dynamic and metacognitive agents. His work has been published and presented in the flagship AI conferences (NeurIPS, ICML, AAAI) and in the Journal of Causal Inference. Professor Forney's other interests include pedagogical methods for inclusive and engaging undergraduate STEM experiences, and the entanglement of increasingly advanced AI in society. He leads the Applied Cognitive Technologies (ACT) Laboratory at LMU, which provides undergraduates with research and industry experience using state-of-the-art techniques in AI applied to real-world problems.

Education

University of California, Los Angeles

Ph.D.

Computer Science

2018

University of California, Los Angeles

M.S.

Computer Science

2015

Loyola Marymount University

B.S.

Computer Science

2012

Areas of Expertise

Causal Inference
Decision Theory
Reinforcement Learning
Artificial Intelligence
Experimental Design

Industry Expertise

Research
Education/Learning
Computer Software

Media Appearances

Lions Develop App to Exonerate the Innocent

LMU (print)

2021-05-20

An exposé on Briefcase, one of the ACT Lab's flagship projects, which uses AI to help lawyers at LMU's Project for the Innocent exonerate the wrongfully incarcerated.


LMU Startup Demystifies the Arcane Legislative Process

LMU (print)

2019-12-05

Highlighting the efforts of LMU alumni and ACT Lab graduates Patrick Utz and Mo Hayat in building a multi-million-dollar startup from their senior project. See https://www.abstract.us/ for recent developments.


Articles

Causal Inference in AI Education: A Primer

Journal of Causal Inference

A. Forney, S. Mueller

2022-09-01

The study of causal inference has seen recent momentum in machine learning and artificial intelligence (AI), particularly in the domains of transfer learning, reinforcement learning, automated diagnostics, and explainability (among others). Yet, despite its increasing application to address many of the boundaries in modern AI, causal topics remain absent in most AI curricula. This work seeks to bridge this gap by providing classroom-ready introductions that integrate into traditional topics in AI, suggests intuitive graphical tools for application to both new and traditional lessons in probabilistic and causal reasoning, and presents avenues for instructors to impress the merit of climbing the “causal hierarchy” to address problems at the levels of associational, interventional, and counterfactual inference. Lastly, this study shares anecdotal instructor experiences, successes, and challenges integrating these lessons at multiple levels of education.
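The "causal hierarchy" referenced in the abstract distinguishes associational quantities like P(Y | X) from interventional ones like P(Y | do(X)). A minimal illustrative simulation (all variable names and probabilities here are hypothetical, chosen only to demonstrate the distinction) shows how the two can disagree when an unobserved confounder U drives both X and Y:

```python
# Hypothetical sketch: a tiny structural causal model with an unobserved
# confounder U -> X and U -> Y. We contrast the associational quantity
# P(Y=1 | X=1) with the interventional P(Y=1 | do(X=1)), obtained by
# overriding the mechanism for X.
import random

random.seed(0)

def sample(do_x=None):
    u = random.random() < 0.5                                  # unobserved confounder
    x = do_x if do_x is not None else (random.random() < (0.8 if u else 0.2))
    y = random.random() < (0.7 if u else 0.3)                  # Y depends only on U here
    return x, y

N = 100_000

# Associational: observe the system as-is, condition on X=1.
obs = [sample() for _ in range(N)]
p_assoc = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Interventional: force X=1 regardless of U.
intv = [sample(do_x=True) for _ in range(N)]
p_do = sum(y for _, y in intv) / N

print(p_assoc, p_do)  # association inflated (~0.62) vs. causal effect (~0.50)
```

Because X has no causal effect on Y in this toy model, P(Y=1 | do(X=1)) equals the marginal P(Y=1) = 0.5, while conditioning on X=1 selects for the confounder state and inflates the estimate.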

Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments

Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems

A. Browne, A. Forney

2022-09-05

Autonomous agents may encounter the transportability problem when they suffer performance deficits from training in an environment that differs in key respects from that in which they are deployed. Although a causal treatment of transportability has been studied in the data sciences, the present work expands its utility into online, multi-agent, reinforcement learning systems in which agents are capable of both experimenting within their own environments and observing the choices of agents in separate, potentially different ones. In order to accelerate learning, agents in these Multi-agent Transport (MAT) problems face the unique challenge of determining which agents are acting in similar environments, and if so, how to incorporate these observations into their policy. We propose and compare several agent policies that exploit local similarities between environments using causal selection diagrams, demonstrating that optimal policies are learned more quickly than in baseline agents that do not. Simulation results support the efficacy of these new agents in a novel variant of the Multi-Armed Bandit problem with MAT environments.
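The core idea above — pooling another agent's experience only where a causal selection diagram certifies the environments agree — can be sketched with a toy two-armed bandit. This is an illustrative reconstruction, not the paper's agents: the arm means, the "invariant arms" set, and the epsilon-greedy learner are all hypothetical.

```python
# Illustrative sketch: a bandit learner that pools a second agent's observed
# pulls, but only for arms that a (hypothetical) selection diagram marks as
# invariant across the two environments.
import random

random.seed(1)

TRUE_MEANS_MINE  = {0: 0.3, 1: 0.7}   # my environment
TRUE_MEANS_OTHER = {0: 0.9, 1: 0.7}   # other agent's environment (arm 0 differs)
INVARIANT_ARMS = {1}                  # selection diagram says only arm 1 transports

counts = {0: 0, 1: 0}
totals = {0: 0.0, 1: 0.0}

def update(arm, reward):
    counts[arm] += 1
    totals[arm] += reward

# Pool the other agent's experience on transportable arms only.
for _ in range(200):
    arm = random.choice([0, 1])
    reward = float(random.random() < TRUE_MEANS_OTHER[arm])
    if arm in INVARIANT_ARMS:
        update(arm, reward)

# Then learn online in my own environment (epsilon-greedy).
for _ in range(1000):
    if random.random() < 0.1 or 0 in counts.values():
        arm = random.choice([0, 1])
    else:
        arm = max(counts, key=lambda a: totals[a] / counts[a])
    update(arm, float(random.random() < TRUE_MEANS_MINE[arm]))

best = max(counts, key=lambda a: totals[a] / counts[a])
```

Naively pooling all of the other agent's data would bias arm 0's estimate toward 0.9 and mislead the learner; restricting transfer to the invariant arm gives a head start on arm 1 without importing that bias.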

Counterfactual randomization: rescuing experimental studies from obscured confounding

Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 2454-2461

A. Forney, E. Bareinboim

2019-07-17

Randomized clinical trials (RCTs) like those conducted by the FDA provide medical practitioners with average effects of treatments, and are generally more desirable than observational studies due to their control of unobserved confounders (UCs), viz., latent factors that influence both treatment and recovery. However, recent results from causal inference have shown that randomization results in a subsequent loss of information about the UCs, which may impede treatment efficacy if left uncontrolled in practice (Bareinboim, Forney, and Pearl 2015). Our paper presents a novel experimental design that can be noninvasively layered atop past and future RCTs to not only expose the presence of UCs in a system, but also reveal patient- and practitioner-specific treatment effects in order to improve decision-making. Applications are given to personalized medicine, second opinions in diagnosis, and employing offline results in online recommender systems.
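The intuition behind the design can be shown in a small simulation (hypothetical names and payoff values throughout, not the paper's experimental protocol): when an unobserved confounder U shapes both a subject's "intended" choice and the payoff, a standard RCT that averages over intent can report no treatment difference, while stratifying the randomized outcomes by intent recovers strong, opposite-signed effects.

```python
# Hypothetical sketch: randomize the action as an RCT would, but also record
# each unit's natural intent (here driven entirely by the unobserved U), then
# estimate intent-specific treatment effects.
import random

random.seed(2)

def payoff(u, x):
    # Payout depends on the action AND the hidden state u (values illustrative).
    table = {(0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.7, (1, 1): 0.3}
    return float(random.random() < table[(u, x)])

N = 50_000
stats = {}  # (intent, action) -> [reward_sum, count]
for _ in range(N):
    u = random.randint(0, 1)
    intent = u                      # UC fully determines intent in this toy model
    action = random.randint(0, 1)   # randomized assignment, intent recorded
    cell = stats.setdefault((intent, action), [0.0, 0])
    cell[0] += payoff(u, action)
    cell[1] += 1

# Intent-specific effects, invisible to an intent-blind analysis:
ets = {k: s / c for k, (s, c) in sorted(stats.items())}
```

In this toy model the average effects coincide (E[Y | do(X=0)] = E[Y | do(X=1)] = 0.45), so an intent-blind RCT reports no difference, yet `ets` reveals that action 1 is best under intent 0 and action 0 is best under intent 1.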
