
3rd ICML 2021 Workshop on Human in the Loop Learning

Recent years have witnessed a rising need for machine learning systems that keep humans in the learning loop. Such systems can be applied to computer vision, natural language processing, robotics, and human-computer interaction. Creating and running them calls for interdisciplinary research spanning artificial intelligence, machine learning, and cognitive science, which we abstract as Human in the Loop Learning (HILL). The HILL workshop aims to bring together researchers and practitioners working on the broad areas of HILL, ranging from interactive/active learning algorithms for real-world decision-making systems (e.g., autonomous driving vehicles and robotic systems), to lifelong learning systems that retain knowledge from different tasks and selectively transfer it to learn new tasks over a lifetime, to models with strong explainability, as well as human-inspired learning. The HILL workshop continues the effort of previous editions to provide a platform for researchers from interdisciplinary areas to share their recent work. A special feature of this year's workshop is to encourage the exploration of human-inspired learning.

July 24

Virtual Conference


Call for Papers

We welcome high-quality submissions on algorithms and system designs in the broad area of human in the loop learning. A few (non-exhaustive) topics of interest include:

  • Interactive/Active machine learning algorithms for autonomous decision-making systems,

  • Lifelong learning systems that learn a sequence of tasks and leverage their shared structure to enable knowledge transfer over a lifetime,

  • Online learning and active learning,

  • Comparison of human in the loop learning and label-efficient learning,

  • Psychology-driven human concept learning,

  • Explainable AI,

  • Human-inspired learning,

  • Design, testing, and assessment of interactive systems for data analytics,

  • Model understanding tools (debugging, visualization, introspection, etc.).


These topics span a variety of scientific disciplines and application domains, including machine learning, human-computer interaction, cognitive science, and robotics. The workshop is an opportunity for scientists in these disciplines to share their perspectives, discuss solutions to common problems, and highlight challenges that can help guide future research. The target audience includes anyone interested in solving problems with machine learning while keeping a human as an integral part of the learning process.

We invite submissions of full papers as well as works-in-progress, position papers, and papers describing open problems and challenges. While original contributions are preferred, we also welcome high-quality work that has recently been published in other venues or is concurrently submitted. We encourage creative ML approaches, as well as interdisciplinarity and perspectives from outside traditional ML. Papers should be 4-8 pages long (excluding references), formatted using the ICML template, and all submissions should be anonymized. Accepted papers may subsequently be submitted to other venues.

Papers can be submitted through CMT:

https://cmt3.research.microsoft.com/HILL2021

Important Dates

Submission deadline: 27th June 2021 (23:59 AoE)

Acceptance notification: 7th July 2021 (23:59 AoE)

Workshop date: 24th July 2021

Accepted Papers of the 3rd HILL Workshop at ICML 2021

PreferenceNet: Encoding Human Preferences in Auction Design

Neehar Peri (University of Maryland)*; Michael J Curry (University of Maryland College Park); Samuel Dooley (University of Maryland); John P Dickerson (University of Maryland)

IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance

Ruixuan Liu (Carnegie Mellon University)*; Changliu Liu (Carnegie Mellon University)

Machine Teaching with Generative Models for Human Learning

Michael Doron (The Broad Institute)*; Hussein Mozannar (Massachusetts Institute of Technology); David Sontag (MIT); Juan Caicedo (Broad Institute)

Differentiable Learning Under Triage

Nastaran Okati (MPI-SWS)*; Abir De (IIT Bombay); Manuel Gomez Rodriguez (MPI-SWS)

High Frequency EEG Artifact Detection with Uncertainty via Early Exit Paradigm

Lorena Qendro (University of Cambridge)*; Alex Campbell (University of Cambridge); Pietro Lió (University of Cambridge); Cecilia Mascolo (University of Cambridge)

Improving Human Decision-Making with Machine Learning

Hamsa Bastani (Wharton); Osbert Bastani (University of Pennsylvania); Wichinpong Sinchaisri (Berkeley Haas)*

Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos

Haoyu Xiong (Shanghai Qizhi Institute)*; Quanzhou Li (University of Toronto); Yun-Chun Chen (University of Toronto); Homanga Bharadhwaj (University of Toronto, Vector Institute); Samarth Sinha (University of Toronto, Vector Institute); Animesh Garg (University of Toronto, Vector Institute, Nvidia)

To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions

Kim de Bie (University of Amsterdam)*; Ana Lucic (University of Amsterdam); Hinda Haned (University of Amsterdam)

Interpretable Machine Learning: Moving From Mythos to Diagnostics

Valerie Chen (Carnegie Mellon University)*; Jeffrey Li (University of Washington); Joon Sik Kim (Carnegie Mellon University); Gregory Plumb; Ameet Talwalkar (CMU)

Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment

Angie W Boggust (MIT CSAIL)*; Benjamin Hoover (IBM Research); Arvind Satyanarayan (MIT CSAIL); Hendrik Strobelt (IBM Research)

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

Ana Lucic (University of Amsterdam)*; Maartje A ter Hoeve (University of Amsterdam); Gabriele Tolomei (University of Rome); Maarten de Rijke (University of Amsterdam & Ahold Delhaize); Fabrizio Silvestri (Sapienza, University of Rome)

Personalizing Pretrained Models

Mina Khan (Massachusetts Institute of Technology (MIT) Media Lab)*; Advait P Rane (BITS Pilani-K. K. Birla Goa Campus); Srivatsa P (National University of Singapore); Asadali Hazariwala (BITS Pilani Goa); Shriram Chenniappa (BITS Pilani Goa); Pattie Maes (Massachusetts Institute of Technology (MIT))

Convergence of a Human-in-the-Loop Policy-Gradient Algorithm With Eligibility Trace Under Reward, Policy, and Advantage Feedback

Ishaan K Shah (Brown University); David M Halpern (Brown University)*; Michael L. Littman (Brown University); Kavosh Asadi (Brown University)

Effect of Combination of HBM and Certainty Sampling on Workload of Semi-Automated Grey Literature Screening

Jinghui Lu (University College Dublin)*; Maeve Henchion (Teagasc Agriculture and Food Development Authority); Brian Mac Namee (University College Dublin)

A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions

Andreas Kirsch (University of Oxford)*; Sebastian Farquhar (University of Oxford); Yarin Gal (University of Oxford)

Active Learning under Pool Set Distribution Shift and Noisy Data

Andreas Kirsch (University of Oxford)*; Tom Rainforth (University of Oxford); Yarin Gal (University of Oxford)

Explaining Reinforcement Learning Policies through Counterfactual Trajectories

Julius Frost (Boston University)*; Olivia Watkins (UC Berkeley); Eric M Weiner (Harvey Mudd College); Pieter Abbeel (UC Berkeley); Trevor Darrell (UC Berkeley); Bryan Plummer (Boston University); Kate Saenko (Boston University)

Differentially Private Active Learning with Latent Space Optimization

Sen-ching S Cheung (University of Kentucky)*; Xiaoqing Zhu (Cisco); Herb Wildfeuer (Cisco); Chongruo Wu; Wai-tian Tan (Cisco)

Explicable Policy Search via Preference-Based Learning under Human Biases

Ze Gong (Arizona State University)*; Yu Zhang (ASU)

Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap

Gokul Swamy (Carnegie Mellon University)*; Sanjiban Choudhury (Aurora Innovation); Drew Bagnell; Steven Wu (Carnegie Mellon University)

On The State of Data In Computer Vision: Human Annotations Remain Indispensable for Developing Deep Learning Models

Zeyad Emam (University of Maryland, College Park); Andrew Kondrich (Scale AI); Sasha Harrison; Felix Lau (Scale AI); Yushi Wang (Scale AI); Aerin Kim (Scale AI)*; Elliot Branson (Scale AI)

ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind

Yuanfei Wang (Peking University)*; Fangwei Zhong (Peking University); Jing Xu (Peking University); Yizhou Wang (PKU)

Accelerating the Convergence of Human-in-the-Loop Reinforcement Learning with Counterfactual Explanations

Jakob Karalus (Ulm University)*; Felix Lindner (Ulm University)

Less is more: An Empirical Analysis of Model Compression for Dialogue

Ahmed O. Baruwa (KPMG)*

Mitigating Sampling Bias and Improving Robustness in Active Learning

Ranganath Krishnan (Intel Labs)*; Alok Kumar Sinha (Intel); Nilesh A Ahuja (Intel); Mahesh Subedar (Intel); Omesh Tickoo (Intel); Ravi Iyer (Intel)

GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks

Lucie Charlotte Magister (University of Cambridge)*; Dmitry Kazhdan (University of Cambridge); Vikash Singh (Alphabet X); Pietro Lió (University of Cambridge)

Interpretable Video Transformers in Imitation Learning of Human Driving

Andrew Dai (Trinity College Dublin, Department of Civil, Structural and Environmental Engineering)*; Wenliang Qiu (Trinity College Dublin, Department of Civil, Structural and Environmental Engineering); Bidisha Ghosh (Trinity College Dublin, Department of Civil, Structural and Environmental Engineering)

Speakers of the 2nd HILL Workshop at ICML 2020

Tom Griffiths, Professor at Princeton University

Raquel Urtasun, Professor at University of Toronto

Chelsea Finn, Assistant Professor at Stanford University

Professor at University of Toronto

Wenwu Zhu, Professor at Tsinghua University

Sergey Levine, Assistant Professor at UC Berkeley

Kalesha Bullard, Postdoc Researcher at Facebook AI Research (FAIR)

Assistant Professor at UC Berkeley

Christian Lebiere, Research Scientist at Carnegie Mellon University

Associate Professor at Carnegie Mellon University

Zeynep Akata, Professor at University of Tübingen

Agenda of the 2nd HILL Workshop at ICML 2020

Accepted Papers of the 2nd HILL Workshop at ICML 2020

A Linear Bandit for Seasonal Environments

Giuseppe Di Benedetto (University of Oxford); Vito Bellini (Amazon); Giovanni Zappella (Amazon)

Online Ride-Sharing Pricing with Fairness

Yupeng Li (University of Toronto/Shenzhen Research Institute of Big Data)*; Mengjia Xia (Cornell University); Dacheng Wen (The University of Hong Kong); Cheng Zhang (Didi Chuxing); Meng Ai (Didi Chuxing); Qun (Tracy) Li (DiDi)

Deep Active Learning: Unified and Principled Method for Query and Training

Changjian Shui (Université Laval)*; Fan Zhou (Laval University); Christian Gagné (Université Laval); Boyu Wang (University of Western Ontario)

GLAD: Localized Anomaly Detection via Human-in-the-Loop Learning

Md Rakibul Islam (Washington State University)*; Shubhomoy Das (School of EECS, Washington State University, Pullman); Janardhan Rao Doppa (Washington State University); Sriraam Natarajan (UT Dallas)

Human-Centric Efficiency Improvements in Image Annotation for Autonomous Driving

Frédéric Ratle (Samasource)*; Martine Bertrand (Samasource)

Online Learning for Distributed and Personal Recommendations - a Fair approach

Martin Tegnér (IKEA Retail, Oxford-Man Institute, University of Oxford)*

Yet Another Study on Active Learning and Human Pose Estimation

Sinan Kaplan (Lappeenranta University of Technology)*; Lasse Lensu (Lappeenranta University of Technology)

Program Synthesis with Pragmatic Communication

Yewen Pu (MIT)*; Marta Kryven (Massachusetts Institute of Technology); Kevin M Ellis (MIT); Joshua Tenenbaum (MIT); Armando Solar-Lezama (MIT)

Preference learning along multiple criteria: A game-theoretic perspective

Kush Bhatia (UC Berkeley)*; Ashwin Pananjady (UC Berkeley); Peter Bartlett (UC Berkeley); Anca Dragan (EECS Department, University of California, Berkeley); Martin Wainwright (UC Berkeley)

Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations

Sarath Sreedharan (Arizona State University)*; Utkarsh Soni (Arizona State University); Mudit Verma (Arizona State University); Siddharth Srivastava (Arizona State University); Subbarao Kambhampati (Arizona State University)

Interactive Segmentation of RGB-D Indoor Scenes using Deep Learning

Maximilian Ruethlein (Friedrich-Alexander University Erlangen-Nuernberg)*; Franz Koeferl (Friedrich-Alexander University Erlangen-Nuernberg); Wolfgang Mehringer (Friedrich-Alexander University Erlangen-Nuernberg); Bjoern Eskofier (Friedrich-Alexander University Erlangen-Nuernberg)

Interactive learning of cognitive programs

Sunayana Rane (MIT)*; Miguel Lázaro-Gredilla (Vicarious AI); Dileep George (Vicarious AI)

The Need for Standardised Explainability

Othman Benchekroun (Dathena)*; Adel Rahimi (Dathena); Qini Zhang (Dathena); Tetiana Kodliuk (Dathena)

Quick Question: Interrupting Users for Microtasks with Reinforcement Learning

Bo-Jhang Ho (UCLA); Bharathan Balaji (Amazon)*; Mehmet Koseoglu (UCLA); Sandeep Singh Sandha (University of California - Los Angeles); Siyou Pei (UCLA); Mani Srivastava (UC Los Angeles)

Active Learning Strategies to Reduce Anomaly Detection False Alarm Rates

Adwait Sahasrabhojanee (USRA/NASA Ames); David Iverson (NASA Ames); Shawn R. Wolfe (NASA Ames); Kevin Bradner (NASA Ames); Nikunj Oza (NASA Ames)*

SCRAM: Simple Checks for Realtime Analysis of Model Training for Non-Expert ML Programmers

Eldon Schoop (University of California, Berkeley)*; Forrest Huang (University of California, Berkeley); Bjoern Hartmann (University of California, Berkeley)

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos

Laura M Smith (UC Berkeley)*; Nikita Dhawan (UC Berkeley); Marvin Zhang (UC Berkeley); Pieter Abbeel (UC Berkeley); Sergey Levine (UC Berkeley)

Personalized Stress Detection with Self-supervised Learned Features

Stefan Matthes (Fortiss GmbH); Zhiwei Han (fortiss GmbH)*; Tianming Qiu (fortiss GmbH); Bruno Michel (IBM Zurich Research Lab); Sören Klinger (fortiss GmbH); Hao Shen (fortiss GmbH); Yuanting Liu (fortiss GmbH); Bashar Altakrouri (IBM Deutschland GmbH)

Metric-Free Individual Fairness in Online Learning

Yahav Bechavod (Hebrew University of Jerusalem)*; Steven Wu (University of Minnesota); Christopher Jung (University of Pennsylvania)

Bias in Multimodal AI: Testbed for Fair Automatic Recruitment

Alejandro Peña (Universidad Autonoma de Madrid); Ignacio Serna (Universidad Autonoma de Madrid); Aythami Morales (Universidad Autonoma de Madrid)*; Julian Fierrez (Universidad Autonoma de Madrid)

Explanation Augmented Feedback in Human-in-the-Loop Reinforcement Learning

Lin Guan (Arizona State University)*; Mudit Verma (Arizona State University); Subbarao Kambhampati (Arizona State University)

Not all Failure Modes are Created Equal: Training Deep Neural Networks for Explicable (Mis)Classification

Alberto Olmo (Arizona State University)*; Sailik Sengupta (Arizona State University); Subbarao Kambhampati (Arizona State University)

Battlesnake Challenge: A Multi-agent Reinforcement Learning Playground with Human-in-the-loop

Jonathan Chung (Amazon Web Services)*; Runfei Luo (Amazon Web Services); Xavier Raffin (Amazon Web Services); Scott Perry (Amazon Web Services)

Faster Human-Machine Collaboration Bounding Box Annotation Framework Based on Active Learning

Minzhe Liu (Nanjing University)*; Li Du (Nanjing University); Yuan Du (Nanjing University); Ruofan Guo (Nanjing University); Xiaoliang Chen (University of California, Irvine)

Combining Human and Machine Intelligence to Assess Stroke Rehabilitation Exercises

Min Hun Lee (Carnegie Mellon University)*; Daniel Siewiorek (Carnegie Mellon University); Asim Smailagic (Carnegie Mellon University); Alexandre Bernardino (Instituto Superior Técnico); Sergi Bermudez (University of Madeira)

Personalized Size Recommendations with Human in the Loop

Leonidas Lefakis (Zalando)*; Evgenii Koriagin (Zalando SE); Julia Lasserre (Zalando Research); Reza Shirvany (Zalando SE)

A Prospective Human-in-the-Loop Experiment using Reinforcement Learning with Planning for Optimizing Energy Demand Response

Lucas Spangher (U.C. Berkeley)*; Manan Khattar (University of California at Berkeley); Akash Gokul (University of California at Berkeley); Akaash Tawade (University of California at Berkeley); Adam Bouyamourn (University of California at Berkeley); Alex R Devonport (U.C. Berkeley); Costas J. Spanos (University of California at Berkeley)

Learning Interpretable Models for Black-Box Agents

Pulkit Verma (Arizona State University)*; Siddharth Srivastava (Arizona State University)

Assisted Robust Reward Design

Jerry Zhi-Yang He (EECS Department, University of California, Berkeley)*; Anca Dragan (EECS Department, University of California, Berkeley)

Better Transferability with Attribute Attention for Generalized Zero-Shot Learning

Ruofan Guo (Nanjing University)*; Li Du (Nanjing University); Yuan Du (Nanjing University); Minzhe Liu (Nanjing University); Xiaoliang Chen (University of California, Irvine)

Human Explanation-based Learning for Machine Comprehension

Qinyuan Ye (University of Southern California)*; Xiao Huang (University of Southern California); Elizabeth Boschee (University of Southern California); Xiang Ren (University of Southern California)

Soliciting Stakeholders’ Fairness Notions in Child Maltreatment Predictive Systems

Hao-Fei Cheng (University of Minnesota)*; Paige Bullock (Kenyon College); Alexandra Chouldechova (CMU); Steven Wu (University of Minnesota); Haiyi Zhu (Carnegie Mellon University)

Improve black-box sequential anomaly detector relevancy with limited user feedback

Chris Kong (Amazon research); Lifan Chen (Amazon research); Ming Chen (Amazon research); Laurent Callot (Amazon research)*; Parminder Bhatia (Amazon)

Feature Expansive Reward Learning: Rethinking Human Input

Andreea Bobu (UC Berkeley)*; Marius Wiggert (UC Berkeley); Claire Tomlin (UC Berkeley); Anca Dragan (EECS Department, University of California, Berkeley)


Organizers

Shanghang Zhang, UC Berkeley
Xin Wang, PhD candidate at UC Berkeley
Shiji Zhou, PhD candidate at Tsinghua University
Fisher Yu, Assistant Professor at ETH Zurich
Li Erran Li, Senior Applied Scientist at Amazon
Kalesha Bullard, Postdoc Researcher at Facebook AI Research (FAIR)
Wenwu Zhu, Professor at Tsinghua University
Trevor Darrell, Professor at UC Berkeley
Pradeep Ravikumar, Associate Professor at CMU
Zeynep Akata, Professor at University of Tübingen