Program Overview

The schedule and links to the virtual rooms are available at

     https://underline.io/events/288/schedule?day=2022-05-10T22%3A00%3A00.000Z&trackId=1488

Time Slots

 

SLOT A

3.00*-7.00* Auckland

23.00-3.00* Beijing

20.30-0.30* Kolkata

17.00-21.00 Paris

11.00-15.00 New York

SLOT B

11.00-15.00 Auckland

7.00-11.00 Beijing

4.30-8.30 Kolkata

1.00-5.00 Paris

19.00-23.00** New York

SLOT C

19.00-23.00 Auckland

15.00-19.00 Beijing

12.30-16.30 Kolkata

9.00-13.00 Paris

3.00-7.00 New York

* marks times that fall on the next day relative to the other cities
** marks times that fall on the previous day relative to the other cities

Program Schedule (Dates and times are given in the Auckland time zone)

 

AAMAS DAY 1

SLOT C (https://www.timeanddate.com/worldclock/converter.html?iso=20220511T070000&p1=179&p2=195&p3=54&p4=33&p5=22)

May 11th, 18.40 – 19.00  Plenary session: 1st Opening
May 11th, 19.00 – 20.00  Plenary session: Invited talk by Joanna Seibt: “The Aims of Social Robotics”
May 11th, 20.00 – 21.00  1C1-1 1C2-1 1C3-1 1C4-1 1C5-1 1C6-1
May 11th, 21.00 – 22.00  1C1-2 1C2-2 1C3-2 1C4-2 1C5-2 1C6-2
May 11th, 22.00 – 23.00  Poster & Demo Session PD1C

SLOT A (https://www.timeanddate.com/worldclock/converter.html?iso=20220511T150000&p1=179&p2=195&p3=54&p4=33&p5=22)

May 12th, 2.40 – 3.00  Plenary session: 2nd Opening
May 12th, 3.00 – 4.00  1A1-1 1A2-1 1A3-1 1A4-1 1A5-1 1A6-1  [D&I Activity 1] Workshop on Artificial Intelligence – Diversity, Belonging, Equity, and Inclusion (AIDBEI)
May 12th, 4.00 – 5.00  1A1-2 1A2-2 1A3-2 1A4-2 1A5-2 1A6-2
May 12th, 5.00 – 6.00  1A1-3 1A2-3 1A3-3 1A4-3 1A5-3 1A6-3
May 12th, 6.00 – 7.00  Plenary session: Invited talk by Shafi Goldwasser: “Safe ML: Robustness, Verification and Privacy”

 

AAMAS DAY 2

SLOT B (https://www.timeanddate.com/worldclock/converter.html?iso=20220511T230000&p1=179&p2=195&p3=54&p4=33&p5=22)

May 12th, 10.40 – 11.00  Plenary session: 3rd Opening
May 12th, 11.00 – 12.00  Plenary session: Invited talk by Mark Sagar: “Autonomous Animation”
May 12th, 12.00 – 13.00  2B1-1 2B2-1 2B3-1 2B4-1 2B5-1  [D&I Activity 2] Queer in AI Social
May 12th, 13.00 – 14.00  2B1-2 2B2-2 2B3-2 2B4-2
May 12th, 14.00 – 15.00  Poster & Demo Session PD2B

SLOT C (https://www.timeanddate.com/worldclock/converter.html?iso=20220512T070000&p1=179&p2=195&p3=54&p4=33&p5=22)

May 12th, 19.00 – 20.00  2C1-1 2C2-1 2C3-1 2C4-1 2C5-1
May 12th, 20.00 – 21.00  2C1-2 2C2-2 2C3-2 2C4-2 2C5-2
May 12th, 21.00 – 22.00  2C1-3 2C2-3 2C3-3 2C4-3 2C5-3
May 12th, 22.00 – 23.00  Poster & Demo Session PD2C

SLOT A (https://www.timeanddate.com/worldclock/converter.html?iso=20220512T150000&p1=179&p2=195&p3=54&p4=33&p5=22)

May 13th, 3.00 – 4.00  Plenary session: Invited talk by Maria Gini: “Decentralized allocation of tasks to agents and robots”
May 13th, 4.00 – 5.00  2A1-1 2A2-1 2A3-1 2A4-1 2A5-1 2A6-1 2A7-1
May 13th, 5.00 – 6.00  2A1-2 2A2-2 2A3-2 2A4-2 2A5-2 2A6-2 2A7-2
May 13th, 6.00 – 7.00  2A1-3 2A2-3 2A3-3 2A4-3 2A5-3 2A6-3

AAMAS DAY 3

SLOT B (https://www.timeanddate.com/worldclock/converter.html?iso=20220512T230000&p1=179&p2=195&p3=54&p4=33&p5=22)

May 13th, 11.00 – 12.00  Plenary session: Invited talk by Bryan Wilder: “AI for Population Health: Melding Data and Algorithms on Networks”
May 13th, 12.00 – 13.00  3B1-1 3B2-1 3B3-1 3B4-1 3B5-1 3B6-1
May 13th, 13.00 – 14.00  3B1-2 3B2-2 3B3-2 3B4-2 3B5-2 3B6-2
May 13th, 14.00 – 15.00  Poster & Demo Session PD3B

SLOT C (https://www.timeanddate.com/worldclock/converter.html?iso=20220513T070000&p1=179&p2=195&p3=54&p4=33&p5=22)

May 13th, 19.00 – 20.00  3C1-1 3C2-1 3C3-1 3C4-1 3C5-1
May 13th, 20.00 – 21.00  3C1-2 3C2-2 3C3-2 3C4-2 3C5-2 3C6-2
May 13th, 21.00 – 22.00  Plenary session: closing, followed by business meeting

Paper Sessions

 

Schedule of the AAMAS main-track full papers, Blue Sky papers, and JAAMAS papers

 

Each session below is listed with its session code, its session chair, and its papers.

AAMAS DAY 1, SLOT C, May 11th, 20.00–22.00 NZST (UTC +12)

1C1-1 Jan Maly

SC&CGT: Equilibria in Schelling Games: Computational Hardness and Robustness
Luca Kreisel, Niclas Boehmer, Vincent Froese and Rolf Niedermeier

SC&CGT: Proportional Representation in Matching Markets: Selecting Multiple Matchings under Dichotomous Preferences
Niclas Boehmer, Markus Brill and Ulrike Schmidt-Kraepelin

SC&CGT: Position-based Matching with Multi-Modal Preferences
Yinghui Wen, Aizhong Zhou and Jiong Guo

SC&CGT: Coalition Formation Games and Social Ranking Solutions
Roberto Lucchetti, Stefano Moretti and Tommaso Rea

1C1-2 Katsuhide Fujita

MA&NCGT: Being Central on the Cheap: Stability in Heterogeneous Multiagent Centrality Games
Gabriel Istrate and Cosmin Bonchiş

MA&NCGT: Corruption in Auctions: Social Welfare Loss in Hybrid Multi-Unit Auctions
Andries van Beek, Ruben Brokkelkamp and Guido Schaefer

MA&NCGT: Automated Configuration and Usage of Strategy Portfolios for Bargaining
Bram Renting, Holger Hoos and Catholijn Jonker

MA&NCGT: Incentives to Invite Others to Form Larger Coalitions
Yao Zhang and Dengji Zhao

1C2-1 Tibor Bosse

JAAMAS, HUM: Trust repair in human-agent teams: the effectiveness of explanations and expressing regret
Esther Kox, José Kerstholt, Tom Hueting and Peter de Vries

HUM: Explainability in Multi-Agent Path/Motion Planning: User-study-driven taxonomy and requirements
Martim Brandao, Masoumeh Mansouri, Areeb Mohammed, Paul Luff and Amanda Coles

HUM: Building contrastive explanations for multi-agent team formation
Athina Georgara, Juan Antonio Rodriguez Aguilar and Carles Sierra

HUM: Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities
Alexander Wich, Holger Schultheis and Michael Beetz

1C2-2 Zehong (Jimmy) Cao

HUM: Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions
Tom Bewley and Freddy LeCue

HUM: CAPS: Comprehensible Abstract Policy Summaries for Explaining Reinforcement Learning Agents
Joe McCalmon, Thai Le, Sarra Alqahtani and Dongwon Lee

HUM: Sympathy based Reinforcement Learning agents
Manisha Senadeera, Thommen George Karimpanal, Santu Rana and Sunil Gupta

1C3-1 Emmanouil Rigas

APP: Auction-based and Distributed Optimization Approaches for Scheduling Observations in Satellite Constellations with Exclusive Orbit Portions
Gauthier Picard

BLUE SKY, APP: “Go to the Children”: Rethinking Intelligent Agent Design and Programming in a Developmental Learning Perspective
Alessandro Ricci

APP: Deep Reinforcement Learning for Active Wake Control
Grigory Neustroev, Sytze P.E. Andringa, Remco A. Verzijlbergh and Mathijs M. De Weerdt

BLUE SKY, APP: Agent-Assisted Life-Long Education and Learning
Tomas Trescak, Roger Lera Leri, Filippo Bistaffa and Juan Antonio Rodriguez Aguilar

1C3-2 Angelo Ferrando

ROBO: Coordinated Multi-Agent Path Finding for Drones and Trucks over Road Networks
Shushman Choudhury, Kiril Solovey, Mykel J. Kochenderfer and Marco Pavone

ROBO: Context-Aware Modelling for Multi-Robot Systems Under Uncertainty
Charlie Street, Bruno Lacerda, Michal Staniaszek, Manuel Mühlig and Nick Hawes

ROBO: Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation
Adrian Simon Bauer, Anne Köpken and Daniel Leidner

ROBO: Refined Hardness of Distance-Optimal Multi-Agent Path Finding
Tzvika Geft and Dan Halperin

1C4-1 Minming Li

JAAMAS, KRRP, MA&NCGT: GDL as a Unifying Domain Description Language for Declarative Automated Negotiation (Extended Abstract)
Dave de Jonge and Dongmo Zhang

JAAMAS, MA&NCGT: Combining quantitative and qualitative reasoning in concurrent multi-player games
Nils Bulling and Valentin Goranko

JAAMAS, KRRP, MA&NCGT: Concurrent Negotiations with Global Utility Functions
Yasser Mohammad and Shinji Nakadai

MA&NCGT: Anti-Malware Sandbox Games
Sujoy Sikdar, Sikai Ruan, Qishen Han, Paween Pitimanaaree, Jeremy Blackthorne, Bulent Yener and Lirong Xia

1C4-2 Michael Winikoff

APP: Fully-Autonomous, Vision-based Traffic Signal Control: from Simulation to Reality
Deepeka Garg, Maria Chli and George Vogiatzis

APP: Trajectory Coordination based on Distributed Constraint Optimization Techniques in Unmanned Air Traffic Management
Gauthier Picard

APP: Hierarchical Value Decomposition for Effective On-demand Ride-Pooling
Jiang Hao and Pradeep Varakantham

EMAS: FCMNet: Full Communication Memory Net for Team-Level Cooperation in Multi-Agent Systems
Yutong Wang and Guillaume Sartoretti

1C5-1 Natalia Criado

COIN: GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning
Jingqing Ruan, Yali Du, Xuantang Xiong, Dengpeng Xing, Xiyun Li, Linghui Meng, Haifeng Zhang, Jun Wang and Bo Xu

COIN: Learning Efficient Diverse Communication for Cooperative Heterogeneous Teaming
Esmaeil Seraj, Zheyuan Wang, Rohan Paleja, Daniel Martin, Matthew Sklar, Anirudh Patel and Matthew Gombolay

KRRP, MA&NCGT: Reasoning about Human-Friendly Strategies in Repeated Keyword Auctions
Francesco Belardinelli, Wojtek Jamroga, Vadim Malvone, Munyque Mittelmann, Aniello Murano and Laurent Perrussel

COIN: A Distributed Differentially Private Algorithm for Resource Allocation in Unboundedly Large Settings
Panayiotis Danassis, Aleksei Triastcyn and Boi Faltings

1C5-2 Badran Raddaoui

BLUE SKY, COIN: Macro Ethics for Governing Equitable Sociotechnical Systems
Jessica Woodgate and Nirav Ajmeri

COIN: Ensemble and Incremental Learning for Norm Violation Detection
Thiago Freitas Dos Santos, Nardine Osman and Marco Schorlemmer

KRRP: Online Collective Multiagent Planning by Offline Policy Reuse with Applications to City-Scale Mobility-on-Demand Systems
Wanyuan Wang, Gerong Wu, Weiwei Wu, Yichuan Jiang and Bo An

1C6-1 Xenofon Evangelopoulos

LEARN: Individual-Level Inverse Reinforcement Learning for Mean Field Games
Yang Chen, Libo Zhang, Jiamou Liu and Shuyue Hu

LEARN: REMAX: Relational Representation for Multi-Agent Exploration
Heechang Ryu, Hayong Shin and Jinkyoo Park

LEARN: Learning to Transfer Role Assignment Across Team Sizes
Dung Nguyen, Phuoc Nguyen, Svetha Venkatesh and Truyen Tran

LEARN: Scalable Multi-Agent Model-Based Reinforcement Learning
Vladimir Egorov and Aleksei Shpilman

1C6-2 Emma Norling

EMAS: ASM-PPO: Asynchronous and Scalable Multi-Agent PPO for Cooperative Charging
Yongheng Liang, Hejun Wu and Haitao Wang

HUM: Multimodal analysis of the predictability of hand-gesture properties
Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström and Gustav Eje Henter

HUM: Coaching agent: making recommendations for behavior change. A case study on improving eating habits
Jules Vandeputte, Antoine Cornuéjols, Nicolas Darcel, Fabien Delaere and Christine Martin

AAMAS DAY 1, SLOT A, May 12th, 3.00-6.00 NZST (UTC +12)

1A1-1 Martin Bullinger

SC&CGT: Optimal Matchings with One-Sided Preferences: Fixed and Cost-Based Quotas
Santhini K. A., Govind S. Sankar and Meghana Nasre

SC&CGT: Three-Dimensional Popular Matching with Cyclic Preferences
Ágnes Cseh and Jannik Peters

SC&CGT: Fair Stable Matching Meets Correlated Preferences
Angelina Brilliantova and Hadi Hosseini

SC&CGT: Computing Balanced Solutions for Large International Kidney Exchange Schemes
Marton Benedek, Peter Biro, Walter Kern and Daniel Paulusma

1A1-2 Noam Hazon

SC&CGT: Multiagent Model-based Credit Assignment for Continuous Control
Dongge Han, Chris Xiaoxuan Lu, Tomasz Michalak and Michael Wooldridge

SC&CGT: Efficient Approximation Algorithms for the Inverse Semivalue Problem
Ilias Diakonikolas, Chrystalla Pavlou, John Peebles and Alistair Stewart

SC&CGT: Pareto optimal and popular house allocation with lower and upper quotas
Ágnes Cseh, Tobias Friedrich and Jannik Peters

JAAMAS, SC&CGT: Towards addressing dynamic multi-agent task allocation in law enforcement
Itshak Tkach and Sofia Amador Nelke

1A1-3 Nisarg Shah

SC&CGT: A Graph-Based Algorithm for the Automated Justification of Collective Decisions
Oliviero Nardi, Arthur Boixel and Ulle Endriss

SC&CGT: Little House (Seat) on the Prairie: Compactness, Gerrymandering, and Population Distribution
Allan Borodin, Omer Lev, Nisarg Shah and Tyrone Strangway

SC&CGT: How to Fairly Allocate Easy and Difficult Chores
Soroush Ebadian, Dominik Peters and Nisarg Shah

APP: Auction-based and Distributed Optimization Approaches for Scheduling Observations in Satellite Constellations with Exclusive Orbit Portions
Gauthier Picard

1A2-1 Georgios Anagnostopoulos

LEARN: Scalable Multi-Agent Model-Based Reinforcement Learning
Vladimir Egorov and Aleksei Shpilman

LEARN: Pareto Conditioned Networks
Mathieu Reymond, Eugenio Bargiacchi and Ann Nowé

LEARN: Unbiased Asymmetric Reinforcement Learning under Partial Observability
Andrea Baisero and Christopher Amato

LEARN: Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning
Milad Kazemi, Mateo Perez, Fabio Somenzi, Sadegh Soudjani, Ashutosh Trivedi and Alvaro Velasquez

1A2-2 Felipe Leno da Silva

LEARN: MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning
Markus Peschl, Arkady Zgonnikov, Frans Oliehoek and Luciano Siebert

LEARN: BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs
Sammie Katt, Hai Nguyen, Frans Oliehoek and Christopher Amato

LEARN: Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning
Seung Hyun Kim, Neale Van Stralen, Girish Chowdhary and Huy T. Tran

LEARN: Multi-Objective Reinforcement Learning with Non-Linear Scalarization
Mridul Agarwal, Vaneet Aggarwal and Tian Lan

1A2-3 Diodato Ferraioli

MA&NCGT: Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent
Ian Gemp, Rahul Savani, Marc Lanctot, Yoram Bachrach, Thomas Anthony, Richard Everett, Andrea Tacchetti, Tom Eccles and Janos Kramar

MA&NCGT: Robust No-Regret Learning in Min-Max Stackelberg Games
Denizalp Goktas, Jiayi Zhao and Amy Greenwald

MA&NCGT: A path-following polynomial equations systems approach for computing Nash equilibria
Hélène Fargier, Paul Jourdan and Régis Sabbadin

MA&NCGT: One-Sided Matching Markets with Endowments: Equilibria and Algorithms
Jugal Garg, Thorben Tröbst and Vijay Vazirani

1A3-1 Roberta Calegari

KRRP: Betweenness Centrality in Multi-Agent Path Finding
Eric Ewing, Jingyao Ren, Dhvani Kansara, Vikraman Sathiyanarayanan and Nora Ayanian

KRRP: Reduction-based Solving of Multi-agent Pathfinding on Large Maps Using Graph Pruning
Matej Husár, Jiří Švancara, Philipp Obermeier, Roman Barták and Torsten Schaub

KRRP: CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces
Keisuke Okumura, Ryo Yonetani, Mai Nishimura and Asako Kanezaki

KRRP: Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning
Yaniel Carreno, Jun Hao Alvin Ng, Yvan Petillot and Ron Petrick

1A3-2 Sasha Rubin

KRRP: Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss
Mustafa O. Karabag, Cyrus Neary and Ufuk Topcu

KRRP: Graphical Representation Enhances Human Compliance with Principles for Graded Argumentation Semantics
Srdjan Vesic, Bruno Yun and Predrag Teovanovic

KRRP: Multiagent Dynamics of Gradual Argumentation Semantics
Louise Dupuis, Elise Bonzon and Nicolas Maudet

KRRP: Socially supervised representation learning: the role of subjectivity in learning efficient representations
Julius Taylor, Eleni Nisioti and Clément Moulin-Frier

1A3-3 Rishi Hazra

KRRP: Epistemic Reasoning in Jason
Michael Vezina and Babak Esfandiari

KRRP: Logical Theories of Collective Attitudes and the Belief Base Perspective
Emiliano Lorini and Eloan Rapion

KRRP: BOID*: Autonomous goal deliberation through abduction
Stipe Pandzic, Jan Broersen and Henk Aarts

KRRP: Preference-Based Goal Refinement in BDI Agents
Mostafa Mohajeriparizi, Giovanni Sileno and Tom Van Engers

1A4-1 Chris Amato

LEARN: Off-Policy Evolutionary Reinforcement Learning with Maximum Mutations
Karush Suri

LEARN: Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning
Baicen Xiao, Bhaskar Ramasubramanian and Radha Poovendran

LEARN: Poincaré-Bendixson Limit Sets in Multi-Agent Learning
Aleksander Czechowski and Georgios Piliouras

LEARN: Characterizing Attacks on Deep Reinforcement Learning
Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li and Dawn Song

1A4-2 Jorge Gómez

SIM: Asynchronous Opinion Dynamics in Social Networks
Petra Berenbrink, Martin Hoefer, Dominik Kaaser, Pascal Lenzner, Malin Rau and Daniel Schmand

SIM: Cascades and Overexposure in Social Networks: The Budgeted Case
Mohammad Irfan, Kim Hancock and Laura Friel

SIM, SC&CGT: Cooperation and learning dynamics under risk diversity and financial incentives
Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Mohamed Chetouani and Francisco C. Santos

SIM: Segregation in social networks of heterogeneous agents acting under incomplete information
D. Kai Zhang and Alexander Carver

1A4-3 Nirav Ajmeri

SIM: Knowledge Transmission and Improvement Across Generations do not Need Strong Selection
Yasser Bourahla, Manuel Atencia and Jérôme Euzenat

SIM: Properties of Reputation Lag Attack Strategies
Sean Sirur and Tim Muller

COIN, SIM: Hacking the Colony: On the Disruptive Effect of Misleading Pheromone and How to Defend against It
Ashay Aswale, Antonio Lopez, Aukkawut Ammartayakun and Carlo Pinciroli

SIM: Agent-based modeling and simulation for malware spreading in D2D networks
Ziyad Benomar, Chaima Ghribi, Elie Cali, Alexander Hinsen and Benedikt Jahnel

1A5-1 Ayan Mukhopadhyay

LEARN: Emergent Cooperation from Mutual Acknowledgment Exchange
Thomy Phan, Felix Sommer, Philipp Altmann, Fabian Ritz, Lenz Belzner and Claudia Linnhoff-Popien

APP, SC&CGT, MA&NCGT: Revenue and User Traffic Maximization in Mobile Short-Video Advertising
Dezhi Ran, Weiqiang Zheng, Yunqi Li, Kaigui Bian, Jie Zhang and Xiaotie Deng

APP, MA&NCGT: Networked Restless Multi-Armed Bandits for Mobile Interventions
Han Ching Ou, Christoph Siebenbrunner, Jackson Killian, Meredith B Brooks, David Kempe, Yevgeniy Vorobeychik and Milind Tambe

KRRP, SC&CGT: Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems
Aditya Mate, Arpita Biswas, Christoph Siebenbrunner, Susobhan Ghosh and Milind Tambe

1A5-2 Hadi Hosseini

SC&CGT: Ordinal Maximin Share Approximation for Chores
Hadi Hosseini, Andrew Searns and Erel Segal-Halevi

MA&NCGT: Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting
Matteo Castiglioni, Alberto Marchesi and Nicola Gatti

MA&NCGT: Being Central on the Cheap: Stability in Heterogeneous Multiagent Centrality Games
Gabriel Istrate and Cosmin Bonchiş

JAAMAS, KRRP, MA&NCGT: GDL as a Unifying Domain Description Language for Declarative Automated Negotiation (Extended Abstract)
Dave de Jonge and Dongmo Zhang

1A5-3 Pieter Libin

LEARN: How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents
Miguel Vasco, Hang Yin, Francisco S. Melo and Ana Paiva

LEARN: The Dynamics of Q-learning in Population Games: a Physics-inspired Continuity Equation Model
Shuyue Hu, Chin-Wing Leung, Ho-Fung Leung and Harold Soh

LEARN: A Mean Field Game Model of Spatial Evolutionary Games
Vincent Hsiao and Dana Nau

LEARN: Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration
Lukas Schäfer, Filippos Christianos, Josiah P. Hanna and Stefano V. Albrecht

1A6-1 Neil Yorke-Smith

EMAS: Pippi: Practical Protocol Instantiation
Samuel Christie, Amit Chopra and Munindar Singh

HUM: Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents
Peizhu Qian and Vaibhav Unhelkar

ROBO: State Supervised Steering Function for Sampling-based Kinodynamic Planning
Pranav Atreya and Joydeep Biswas

APP: Trajectory Coordination based on Distributed Constraint Optimization Techniques in Unmanned Air Traffic Management
Gauthier Picard

1A6-2 Athirai A. Irissappane

LEARN: Multi-Agent Curricula and Emergent Implicit Signaling
Niko Grupen, Daniel Lee and Bart Selman

LEARN: Learning Equilibria in Mean-Field Games: Introducing Mean-Field PSRO
Paul Muller, Mark Rowland, Romuald Elie, Georgios Piliouras, Julien Perolat, Mathieu Lauriere, Raphael Marinier, Olivier Pietquin and Karl Tuyls

LEARN: Lyapunov Exponents for Diversity in Differentiable Games
Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz and Jakob Foerster

LEARN: ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments
Yash Shukla, Christopher Thierauf, Ramtin Hosseini, Gyan Tatiya and Jivko Sinapov

1A6-3 Jivko Sinapov

LEARN: Scaling Mean Field Games with Online Mirror Descent
Julien Pérolat, Sarah Perrin, Romuald Elie, Mathieu Laurière, Georgios Piliouras, Matthieu Geist, Karl Tuyls and Olivier Pietquin

ROBO: Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning
Naman Shah and Siddharth Srivastava

LEARN: Any-Play: an Intrinsic Augmentation for Zero-Shot Coordination
Keane Lucas and Ross Allen

BLUE SKY, LEARN: Towards Anomaly Detection in Reinforcement Learning
Robert Müller, Steffen Illium, Thomy Phan, Tom Haider and Claudia Linnhoff-Popien

AAMAS DAY 2, SLOT B, May 12th, 12.00-14.00 NZST (UTC +12)

2B1-1 Alan Tsang

SC&CGT: Ordinal Maximin Share Approximation for Chores
Hadi Hosseini, Andrew Searns and Erel Segal-Halevi

SC&CGT: Beyond Cake Cutting: Allocating Homogeneous Divisible Goods
Ioannis Caragiannis, Vasilis Gkatzelis, Alexandros Psomas and Daniel Schoepflin

SC&CGT: Fair and Truthful Mechanism with Limited Subsidy
Hiromichi Goko, Ayumi Igarashi, Yasushi Kawase, Kazuhisa Makino, Hanna Sumita, Akihisa Tamura, Yu Yokoi and Makoto Yokoo

SC&CGT: Multivariate Algorithmics for Eliminating Envy by Donating Goods
Niclas Boehmer, Robert Bredereck, Klaus Heeger, Dušan Knop and Junjie Luo

2B1-2 Makoto Yokoo

BLUE SKY, SC&CGT: Social Choice Around the Block: On the Computational Social Choice of Blockchain
Davide Grossi

SC&CGT: How Hard is Bribery in Elections with Randomly Selected Voters
Liangde Tao, Lin Chen, Lei Xu, Weidong Shi, Ahmed Sunny and Md Mahabub Uz Zaman

BLUE SKY, SC&CGT: Augmented Democratic Deliberation: Can Conversational Agents Boost Deliberation in Social Media?
Rafik Hadfi and Takayuki Ito

SC&CGT: Facility Location With Approval Preferences: Strategyproofness and Fairness
Edith Elkind, Minming Li and Houyu Zhou

2B2-1 Vaibhav Unhelkar

ROBO: State Supervised Steering Function for Sampling-based Kinodynamic Planning
Pranav Atreya and Joydeep Biswas

ROBO: Intention-Aware Navigation in Crowds with Extended-Space POMDP Planning
Himanshu Gupta, Bradley Hayes and Zachary Sunberg

ROBO: A Hierarchical Bayesian Process for Inverse RL in Partially-Controlled Environments
Kenneth Bogert and Prashant Doshi

ROBO: Autonomous Swarm Shepherding Using Curriculum-Based Reinforcement Learning
Aya Hussein, Eleni Petraki, Sondoss Elsawah and Hussein A. Abbass

2B2-2 Dengji Zhao

MA&NCGT: Balancing Fairness and Efficiency in Traffic Routing via Interpolated Traffic Assignment
Devansh Jalota, Kiril Solovey, Matthew Tsao, Stephen Zoepf and Marco Pavone

MA&NCGT: The Generalized Magician Problem under Unknown Distributions and Related Applications
Aravind Srinivasan and Pan Xu

MA&NCGT: Strategy-Proof House Allocation with Existing Tenants over Social Networks
Bo You, Ludwig Dierks, Taiki Todo, Minming Li and Makoto Yokoo

JAAMAS, MA&NCGT: Designing Efficient and Fair Mechanisms for Multi-Type Resource Allocation
Xiaoxi Guo, Sujoy Sikdar, Haibin Wang, Lirong Xia, Yongzhi Cao and Hanpin Wang

2B3-1 Rafael Bordini

EMAS: Pippi: Practical Protocol Instantiation
Samuel Christie, Amit Chopra and Munindar Singh

EMAS: FCMNet: Full Communication Memory Net for Team-Level Cooperation in Multi-Agent Systems
Yutong Wang and Guillaume Sartoretti

EMAS: ASM-PPO: Asynchronous and Scalable Multi-Agent PPO for Cooperative Charging
Yongheng Liang, Hejun Wu and Haitao Wang

EMAS: Testing requirements via User and System Stories in Agent Systems
Sebastian Rodriguez, John Thangarajah, Michael Winikoff and Dhirendra Singh

2B3-2 Karthik Abinav Sankararaman

LEARN: A Mean Field Game Model of Spatial Evolutionary Games
Vincent Hsiao and Dana Nau

LEARN: Multi-Agent Curricula and Emergent Implicit Signaling
Niko Grupen, Daniel Lee and Bart Selman

LEARN: Scaling Mean Field Games with Online Mirror Descent
Julien Pérolat, Sarah Perrin, Romuald Elie, Mathieu Laurière, Georgios Piliouras, Matthieu Geist, Karl Tuyls and Olivier Pietquin

LEARN: Individual-Level Inverse Reinforcement Learning for Mean Field Games
Yang Chen, Libo Zhang, Jiamou Liu and Shuyue Hu

2B4-1 Samarth Swarup

COIN, KRRP: Quantitative Group Trust: A Two-Stage Verification Approach
Jamal Bentahar, Nagat Drawel and Abdeladim Sadiki

KRRP: Controller Synthesis for Omega-Regular and Steady-State Specifications
Alvaro Velasquez, Ismail Alkhouri, Andre Beckus, Ashutosh Trivedi and George Atia

KRRP: Multi-Agent Path Finding for Precedence-Constrained Goal Sequences
Han Zhang, Jingkai Chen, Jiaoyang Li, Brian Williams and Sven Koenig

JAAMAS, COIN, KRRP: Enabling BDI group plans with coordination middleware
Stephen Cranefield

2B4-2 Toshiharu Sugawara

SIM: Deploying Vaccine Distribution Sites for Improved Accessibility and Equity to Support Pandemic Response
George Li, Ann Li, Madhav Marathe, Aravind Srinivasan, Leonidas Tsepenekas and Anil Kumar Vullikanti

SIM: Using Agent-Based Simulator to Assess Interventions Against COVID-19 in a Small Community Generated from Map Data
Mitsuteru Abe, Fabio Tanaka, Jair Pereira Junior, Anna Bogdanova, Tetsuya Sakurai and Claus Aranha

SIM: Residual Entropy-based Graph Generative Algorithms
Wencong Liu, Jiamou Liu, Zijian Zhang, Yiwei Liu and Liehuang Zhu

JAAMAS, SIM, EMAS: Automatic calibration framework of agent-based models for dynamic and heterogeneous parameters
Dongjun Kim, Tae-Sub Yun, Il-Chul Moon and Jang Won Bae

2B5-1 Adrian Haret

KRRP: Betweenness Centrality in Multi-Agent Path Finding
Eric Ewing, Jingyao Ren, Dhvani Kansara, Vikraman Sathiyanarayanan and Nora Ayanian

KRRP: Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss
Mustafa O. Karabag, Cyrus Neary and Ufuk Topcu

KRRP: Epistemic Reasoning in Jason
Michael Vezina and Babak Esfandiari

JAAMAS, KRRP, MA&NCGT: Concurrent Negotiations with Global Utility Functions
Yasser Mohammad and Shinji Nakadai

AAMAS DAY 2, SLOT C, May 12th, 19.00-22.00 NZST (UTC +12)

2C1-1 Long Tran-Thanh

MA&NCGT: Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting
Matteo Castiglioni, Alberto Marchesi and Nicola Gatti

MA&NCGT: On Parameterized Complexity of Binary Networked Public Goods Game
Arnab Maiti and Palash Dey

MA&NCGT: The spoofing resistance of frequent call markets
Buhong Liu, Maria Polukarov, Carmine Ventre, Lingbo Li, Leslie Kanthan, Fan Wu and Michail Basios

MA&NCGT: The Competition and Inefficiency in Urban Road Last-Mile Delivery
Keyang Zhang, Jose Javier Escribano Macias, Dario Paccagnan and Panagiotis Angeloudis

2C1-2 Pallavi Jain

SC&CGT: Computation and Bribery of Voting Power in Delegative Simple Games
Gianlorenzo D’Angelo, Esmaeil Delfaraz and Hugo Gilbert

SC&CGT: Equilibrium Computation For Knockout Tournaments Played By Groups
Grzegorz Lisowski, Ramanujan Maadapuzhi-Sridharan and Paolo Turrini

SC&CGT: How Hard is Bribery in Elections with Randomly Selected Voters
Liangde Tao, Lin Chen, Lei Xu, Weidong Shi, Ahmed Sunny and Md Mahabub Uz Zaman

SC&CGT: Optimal Matchings with One-Sided Preferences: Fixed and Cost-Based Quotas
Santhini K. A., Govind S. Sankar and Meghana Nasre

2C1-3 Paul Harrenstein

SC&CGT: Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes
Felix Brandt, Patrick Lederer and René Romen

SC&CGT: The Price of Majority Support
Robin Fritsch and Roger Wattenhofer

SC&CGT: A Graph-Based Algorithm for the Automated Justification of Collective Decisions
Oliviero Nardi, Arthur Boixel and Ulle Endriss

SC&CGT: Simulating Multiwinner Voting Rules in Judgment Aggregation
Julian Chingoma, Ulle Endriss and Ronald de Haan

2C2-1 Elizabeth Sklar

ROBO: Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning
Naman Shah and Siddharth Srivastava

ROBO: Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation
Tarik Kelestemur, Robert Platt and Taskin Padir

BLUE SKY, ROBO: The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts
Amanda Prorok, Jan Blumenkamp, Qingbiao Li, Ryan Kortvelesy, Zhe Liu and Ethan Stump

ROBO: Standby-Based Deadlock Avoidance Method for Multi-Agent Pickup and Delivery Tasks
Tomoki Yamauchi, Yuki Miyashita and Toshiharu Sugawara

2C2-2 Fernando P. Santos

LEARN: Best-Response Bayesian Reinforcement Learning with BA-POMDPs for Centaurs
Mustafa Mert Çelikok, Frans A. Oliehoek and Samuel Kaski

LEARN: Optimizing Multi-Agent Coordination via Hierarchical Graph Probabilistic Recursive Reasoning
Saar Cohen and Noa Agmon

JAAMAS, LEARN: Goal-driven Active Reinforcement Learning with Human Teachers
Nicolas Bougie and Ryutaro Ichise

2C2-3 Chongjie Zhang

LEARN: How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents
Miguel Vasco, Hang Yin, Francisco S. Melo and Ana Paiva

LEARN: Lazy-MDPs: Towards Interpretable RL by Learning When to Act
Alexis Jacq, Johan Ferret, Olivier Pietquin and Matthieu Geist

LEARN: Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning
Wanqi Xue, Wei Qiu, Bo An, Zinovi Rabinovich, Svetlana Obraztsova and Chai Kiat Yeo

LEARN: Robust Learning from Observation with Model Misspecification
Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy and Adrian Weller

2C3-1 Paolo Turrini

SC&CGT: How Hard is Safe Bribery?
Neel Karia, Faraaz Mallick and Palash Dey

SC&CGT: Multiagent Model-based Credit Assignment for Continuous Control
Dongge Han, Chris Xiaoxuan Lu, Tomasz Michalak and Michael Wooldridge

BLUE SKY, SC&CGT: Augmented Democratic Deliberation: Can Conversational Agents Boost Deliberation in Social Media?
Rafik Hadfi and Takayuki Ito

JAAMAS, SC&CGT: Reaching Consensus Under a Deadline
Marina Bánnikova, Lihi Dery, Svetlana Obraztsova, Zinovi Rabinovich and Jeffrey S. Rosenschein

2C3-2 Jan Maly

KRRP: Reduction-based Solving of Multi-agent Pathfinding on Large Maps Using Graph Pruning
Matej Husár, Jiří Švancara, Philipp Obermeier, Roman Barták and Torsten Schaub

KRRP: Graphical Representation Enhances Human Compliance with Principles for Graded Argumentation Semantics
Srdjan Vesic, Bruno Yun and Predrag Teovanovic

KRRP: Online Collective Multiagent Planning by Offline Policy Reuse with Applications to City-Scale Mobility-on-Demand Systems
Wanyuan Wang, Gerong Wu, Weiwei Wu, Yichuan Jiang and Bo An

KRRP: Logical Theories of Collective Attitudes and the Belief Base Perspective
Emiliano Lorini and Eloan Rapion

2C3-3 Alberto Castellini

KRRP: CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces
Keisuke Okumura, Ryo Yonetani, Mai Nishimura and Asako Kanezaki

KRRP, LEARN: Learning Heuristics for Combinatorial Assignment by Optimally Solving Subproblems
Fredrik Präntare, Herman Appelgren, Mattias Tiger, David Bergström and Fredrik Heintz

KRRP: BOID*: Autonomous goal deliberation through abduction
Stipe Pandzic, Jan Broersen and Henk Aarts

KRRP: Multiagent Dynamics of Gradual Argumentation Semantics
Louise Dupuis, Elise Bonzon and Nicolas Maudet

2C4-1 Laurent Perrussel

KRRP: Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning
Yaniel Carreno, Jun Hao Alvin Ng, Yvan Petillot and Ron Petrick

KRRP: Socially supervised representation learning: the role of subjectivity in learning efficient representations
Julius Taylor, Eleni Nisioti and Clément Moulin-Frier

KRRP: Negotiated Path Planning for Non-Cooperative Multi-Robot Systems
Anna Gautier, Alex Stephens, Bruno Lacerda, Nick Hawes and Michael Wooldridge

KRRP: Preference-Based Goal Refinement in BDI Agents
Mostafa Mohajeriparizi, Giovanni Sileno and Tom Van Engers

2C4-2 Ann Nowe

HUM: Warmth and Competence in Human-Agent Cooperation
Kevin McKee, Xuechunzi Bai and Susan Fiske

APP: Deep Reinforcement Learning for Active Wake Control
Grigory Neustroev, Sytze P.E. Andringa, Remco A. Verzijlbergh and Mathijs M. De Weerdt

LEARN: Spiking Pitch Black: Poisoning an Unknown Environment to Attack Unknown Reinforcement Learners
Hang Xu, Xinghua Qu and Zinovi Rabinovich

2C5-1 Alessandro Ricci

LEARN: SIDE: State Inference for Partially Observable Cooperative Multi-Agent Reinforcement Learning
Zhiwei Xu, Yunpeng Bai, Dapeng Li, Bin Zhang and Guoliang Fan

LEARN: Budgeted Combinatorial Multi-Armed Bandits
Debojit Das, Shweta Jain and Sujit Gujar

LEARN: Pareto Conditioned Networks
Mathieu Reymond, Eugenio Bargiacchi and Ann Nowé

LEARN: MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning
Markus Peschl, Arkady Zgonnikov, Frans Oliehoek and Luciano Siebert

2C5-2 Neil Yorke-Smith

SIM: Knowledge Transmission and Improvement Across Generations do not Need Strong Selection
Yasser Bourahla, Manuel Atencia and Jérôme Euzenat

SIM: Using Agent-Based Simulator to Assess Interventions Against COVID-19 in a Small Community Generated from Map Data
Mitsuteru Abe, Fabio Tanaka, Jair Pereira Junior, Anna Bogdanova, Tetsuya Sakurai and Claus Aranha

SIM: Asynchronous Opinion Dynamics in Social Networks
Petra Berenbrink, Martin Hoefer, Dominik Kaaser, Pascal Lenzner, Malin Rau and Daniel Schmand

HUM: Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment
Ganesh Ghalme, Vineet Nair, Vishakha Patil and Yilun Zhou

2C5-3 Emma Norling

LEARN: Emergent Cooperation from Mutual Acknowledgment Exchange
Thomy Phan, Felix Sommer, Philipp Altmann, Fabian Ritz, Lenz Belzner and Claudia Linnhoff-Popien

HUM: COPALZ: A Computational Model of Pathological Appraisal Biases for an Interactive Virtual Alzheimer Patient
Amine Benamara, Jean-Claude Martin, Elise Prigent, Laurence Chaby, Mohamed Chetouani, Jean Zagdoun, Hélène Vanderstichel, Sébastien Dacunha and Brian Ravenet

SIM: Residual Entropy-based Graph Generative Algorithms
Wencong Liu, Jiamou Liu, Zijian Zhang, Yiwei Liu and Liehuang Zhu

ROBO: Refined Hardness of Distance-Optimal Multi-Agent Path Finding
Tzvika Geft and Dan Halperin

AAMAS DAY 2, SLOT A, May 13th, 4.00-7.00 NZST (UTC +12)

2A1-1 Arpita Biswas

SC&CGT: Computation and Bribery of Voting Power in Delegative Simple Games
Gianlorenzo D’Angelo, Esmaeil Delfaraz and Hugo Gilbert

SC&CGT: The Price of Majority Support
Robin Fritsch and Roger Wattenhofer

JAAMAS, SC&CGT: Reaching Consensus Under a Deadline
Marina Bánnikova, Lihi Dery, Svetlana Obraztsova, Zinovi Rabinovich and Jeffrey S. Rosenschein

SC&CGT: Tracking Truth by Weighting Proxies in Liquid Democracy
Yuzhe Zhang and Davide Grossi

2A1-2 Alan Tsang

BLUE SKY, SC&CGT: Foundations for the Grassroots Formation of a Democratic Metaverse
Ehud Shapiro and Nimrod Talmon

SC&CGT: Simulating Multiwinner Voting Rules in Judgment Aggregation
Julian Chingoma, Ulle Endriss and Ronald de Haan

SC&CGT: Welfare vs. Representation in Participatory Budgeting
Roy Fairstein, Dan Vilenchik, Reshef Meir and Kobi Gal

SC&CGT: Selecting PhD Students and Projects with Limited Funding
Jatin Jindal, Jérôme Lang, Katarína Cechlárová and Julien Lesca

2A1-3 Nisarg Shah

SC&CGT: Equilibrium Computation For Knockout Tournaments Played By Groups
Grzegorz Lisowski, Ramanujan Maadapuzhi-Sridharan and Paolo Turrini

SC&CGT: Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes
Felix Brandt, Patrick Lederer and René Romen

SC&CGT: How Hard is Safe Bribery?
Neel Karia, Faraaz Mallick and Palash Dey

SC&CGT: Computing Nash Equilibria for District-based Nominations
Paul Harrenstein and Paolo Turrini

2A2-1 Sandip Sen

HUM: Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents
Peizhu Qian and Vaibhav Unhelkar

HUM: Multimodal analysis of the predictability of hand-gesture properties
Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström and Gustav Eje Henter

HUM: Factorial Agent Markov Model: Modeling Other Agents’ Behavior in presence of Dynamic Latent Decision Factors
Liubove Orlov-Savko, Abhinav Jain, Gregory Gremillion, Catherine Neubauer, Jonroy Canady and Vaibhav Unhelkar

HUM: Towards Pluralistic Value Alignment: Aggregating Value Systems Through Lp-Regression
Roger Lera-Leri, Filippo Bistaffa, Marc Serramia, Maite Lopez-Sanchez and Juan Rodriguez-Aguilar

2A2-2 Duncan McElfresh

HUM: Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment
Ganesh Ghalme, Vineet Nair, Vishakha Patil and Yilun Zhou

BLUE SKY, ROBO: Robots Teaching Humans: A New Communication Paradigm via Reverse Teleoperation
Rika Antonova and Ankur Handa

HUM: Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming
Aaquib Tabrez, Matthew B. Luebbers and Bradley Hayes

HUM: Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction
Sharadhi Alape Suryanarayana, David Sarne and Sarit Kraus

2A2-3 Beatrice Biancardi

HUM: Coaching agent: making recommendations for behavior change. A case study on improving eating habits
Jules Vandeputte, Antoine Cornuéjols, Nicolas Darcel, Fabien Delaere and Christine Martin

HUM: Warmth and Competence in Human-Agent Cooperation
Kevin McKee, Xuechunzi Bai and Susan Fiske

HUM: COPALZ: A Computational Model of Pathological Appraisal Biases for an Interactive Virtual Alzheimer Patient
Amine Benamara, Jean-Claude Martin, Elise Prigent, Laurence Chaby, Mohamed Chetouani, Jean Zagdoun, Hélène Vanderstichel, Sébastien Dacunha and Brian Ravenet

HUM: Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning
Parand Alizadeh Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte and Sheila A. McIlraith

2A3-1 Jivko Sinapov

LEARN: Any-Play: an Intrinsic Augmentation for Zero-Shot Coordination
Keane Lucas and Ross Allen

LEARN: Budgeted Combinatorial Multi-Armed Bandits
Debojit Das, Shweta Jain and Sujit Gujar

LEARN: Evaluating Strategy Exploration in Empirical Game-Theoretic Analysis
Yongzhao Wang, Qiurui Ma and Michael Wellman

LEARN: D3C: Reducing the Price of Anarchy in Multi-Agent Learning
Ian Gemp, Kevin McKee, Richard Everett, Edgar Duenez-Guzman, Yoram Bachrach, David Balduzzi and Andrea Tacchetti

2A3-2 Martin Hoefer

MA&NCGT: On Parameterized Complexity of Binary Networked Public Goods Game
Arnab Maiti and Palash Dey

MA&NCGT: Balancing Fairness and Efficiency in Traffic Routing via Interpolated Traffic Assignment
Devansh Jalota, Kiril Solovey, Matthew Tsao, Stephen Zoepf and Marco Pavone

MA&NCGT: Corruption in Auctions: Social Welfare Loss in Hybrid Multi-Unit Auctions
Andries van Beek, Ruben Brokkelkamp and Guido Schaefer

BLUE SKY, SC&CGT: Foundations for the Grassroots Formation of a Democratic Metaverse
Ehud Shapiro and Nimrod Talmon

2A3-3 Piotr Faliszewski

MA&NCGT: The Generalized Magician Problem under Unknown Distributions and Related Applications
Aravind Srinivasan and Pan Xu

MA&NCGT: Automated Configuration and Usage of Strategy Portfolios for Bargaining
Bram Renting, Holger Hoos and Catholijn Jonker

MA&NCGT: The spoofing resistance of frequent call markets
Buhong Liu, Maria Polukarov, Carmine Ventre, Lingbo Li, Leslie Kanthan, Fan Wu and Michail Basios

JAAMAS, MA&NCGT: Combining quantitative and qualitative reasoning in concurrent multi-player games
Nils Bulling and Valentin Goranko

2A4-1 Paul Harrenstein

SC&CGT: Equilibria in Schelling Games: Computational Hardness and Robustness
Luca Kreisel, Niclas Boehmer, Vincent Froese and Rolf Niedermeier

COIN, KRRP: Quantitative Group Trust: A Two-Stage Verification Approach
Jamal Bentahar, Nagat Drawel and Abdeladim Sadiki

HUM: Building contrastive explanations for multi-agent team formation
Athina Georgara, Juan Antonio Rodriguez Aguilar and Carles Sierra

2A4-2 Hadi Hosseini

SC&CGT: Proportional Representation in Matching Markets: Selecting Multiple Matchings under Dichotomous Preferences
Niclas Boehmer, Markus Brill and Ulrike Schmidt-Kraepelin

MA&NCGT: The Competition and Inefficiency in Urban Road Last-Mile Delivery
Keyang Zhang, Jose Javier Escribano Macias, Dario Paccagnan and Panagiotis Angeloudis

SC&CGT: Beyond Cake Cutting: Allocating Homogeneous Divisible Goods
Ioannis Caragiannis, Vasilis Gkatzelis, Alexandros Psomas and Daniel Schoepflin

BLUE SKY, SC&CGT: Social Choice Around the Block: On the Computational Social Choice of Blockchain
Davide Grossi

2A4-3 Amy Greenwald

LEARN: Best-Response Bayesian Reinforcement Learning with BA-POMDPs for Centaurs
Mustafa Mert Çelikok, Frans A. Oliehoek and Samuel Kaski

LEARN: Lazy-MDPs: Towards Interpretable RL by Learning When to Act
Alexis Jacq, Johan Ferret, Olivier Pietquin and Matthieu Geist

LEARN: Centralized Model and Exploration Policy for Multi-Agent RL
Qizhen Zhang, Chris Lu, Animesh Garg and Jakob Foerster

2A5-1 Georgios Anagnostopoulos

KRRP, LEARN: Learning Heuristics for Combinatorial Assignment by Optimally Solving Subproblems
Fredrik Präntare, Herman Appelgren, Mattias Tiger, David Bergström and Fredrik Heintz

KRRP: Negotiated Path Planning for Non-Cooperative Multi-Robot Systems
Anna Gautier, Alex Stephens, Bruno Lacerda, Nick Hawes and Michael Wooldridge

KRRP: A Symbolic Representation for Probabilistic Dynamic Epistemic Logic
Sébastien Gamblin, Alexandre Niveau and Maroua Bouzid

KRRP: A Declarative Framework for Maximal k-plex Enumeration Problems
Said Jabbour, Nizar Mhadhbi, Badran Raddaoui and Lakhdar Sais

2A5-2 Alberto Castellini

LEARN: Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement Learning
Jiachen Yang, Ethan Wang, Rakshit Trivedi, Tuo Zhao and Hongyuan Zha

LEARN: A Deeper Look at Discounting Mismatch in Actor-Critic Algorithms
Shangtong Zhang, Romain Laroche, Harm van Seijen, Shimon Whiteson and Remi Tachet des Combes

LEARN: Optimizing Multi-Agent Coordination via Hierarchical Graph Probabilistic Recursive Reasoning
Saar Cohen and Noa Agmon

LEARN: Concave Utility Reinforcement Learning: the Mean-field Game viewpoint
Matthieu Geist, Julien Pérolat, Mathieu Laurière, Romuald Elie, Sarah Perrin, Oliver Bachem, Rémi Munos and Olivier Pietquin

2A5-3 Maria Gini

ROBO: Coordinated Multi-Agent Path Finding for Drones and Trucks over Road Networks
Shushman Choudhury, Kiril Solovey, Mykel J. Kochenderfer and Marco Pavone

ROBO: Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation
Tarik Kelestemur, Robert Platt and Taskin Padir

ROBO: Intention-Aware Navigation in Crowds with Extended-Space POMDP Planning
Himanshu Gupta, Bradley Hayes and Zachary Sunberg

2A6-1 Sven Koenig

ROBO: Context-Aware Modelling for Multi-Robot Systems Under Uncertainty
Charlie Street, Bruno Lacerda, Michal Staniaszek, Manuel Mühlig and Nick Hawes

ROBO: A Hierarchical Bayesian Process for Inverse RL in Partially-Controlled Environments
Kenneth Bogert and Prashant Doshi

BLUE SKY, ROBO: The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts
Amanda Prorok, Jan Blumenkamp, Qingbiao Li, Ryan Kortvelesy, Zhe Liu and Ethan Stump

2A6-2 Viviana Mascardi

JAAMAS, SC&CGT: Voting with Random Classifiers in Ensembles (VORACE)
Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini and Francesca Rossi

HUM: Group fairness in bandit arm selection
Candice Schumann, Zhi Lang, Nicholas Mattei and John P. Dickerson

APP: Fully-Autonomous, Vision-based Traffic Signal Control: from Simulation to Reality
Deepeka Garg, Maria Chli and George Vogiatzis

COIN: GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning
Jingqing Ruan, Yali Du, Xuantang Xiong, Dengpeng Xing, Xiyun Li, Linghui Meng, Haifeng Zhang, Jun Wang and Bo Xu

2A6-3 Jaime Sichman

HUM: Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions
Tom Bewley and Freddy LeCue

BLUE SKY, COIN: Macro Ethics for Governing Equitable Sociotechnical Systems
Jessica Woodgate and Nirav Ajmeri

BLUE SKY, APP: “Go to the Children”: Rethinking Intelligent Agent Design and Programming in a Developmental Learning Perspective
Alessandro Ricci

JAAMAS, HUM: Trust repair in human-agent teams: the effectiveness of explanations and expressing regret
Esther Kox, José Kerstholt, Tom Hueting and Peter de Vries

2A7-1 Joseph Giampapa

COIN: Ensemble and Incremental Learning for Norm Violation Detection
Thiago Freitas Dos Santos, Nardine Osman and Marco Schorlemmer

HUM: CAPS: Comprehensible Abstract Policy Summaries for Explaining Reinforcement Learning Agents
Joe McCalmon, Thai Le, Sarra Alqahtani and Dongwon Lee

ROBO: Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation
Adrian Simon Bauer, Anne Köpken and Daniel Leidner

COIN: Learning Efficient Diverse Communication for Cooperative Heterogeneous Teaming
Esmaeil Seraj, Zheyuan Wang, Rohan Paleja, Daniel Martin, Matthew Sklar, Anirudh Patel and Matthew Gombolay

2A7-2 Nirav Ajmeri

SIM: Deploying Vaccine Distribution Sites for Improved Accessibility and Equity to Support Pandemic Response
George Li, Ann Li, Madhav Marathe, Aravind Srinivasan, Leonidas Tsepenekas and Anil Kumar Vullikanti

HUM: Explainability in Multi-Agent Path/Motion Planning: User-study-driven taxonomy and requirements
Martim Brandao, Masoumeh Mansouri, Areeb Mohammed, Paul Luff and Amanda Coles

KRRP: Controller Synthesis for Omega-Regular and Steady-State Specifications
Alvaro Velasquez, Ismail Alkhouri, Andre Beckus, Ashutosh Trivedi and George Atia

COIN: A Distributed Differentially Private Algorithm for Resource Allocation in Unboundedly Large Settings
Panayiotis Danassis, Aleksei Triastcyn and Boi Faltings

AAMAS DAY 3, SLOT B, May 13th, 12.00-14.00 NZST (UTC +12)

3B1-1 Yasser Mohammad

LEARN: REMAX: Relational Representation for Multi-Agent Exploration
Heechang Ryu, Hayong Shin and Jinkyoo Park

LEARN: Centralized Model and Exploration Policy for Multi-Agent RL
Qizhen Zhang, Chris Lu, Animesh Garg and Jakob Foerster

LEARN: Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement Learning
Jiachen Yang, Ethan Wang, Rakshit Trivedi, Tuo Zhao and Hongyuan Zha

LEARN: Learning Theory of Mind via Dynamic Traits Attribution
Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh and Truyen Tran

3B1-2 Guangliang Li

LEARN: Off-Policy Evolutionary Reinforcement Learning with Maximum Mutations
Karush Suri

LEARN: Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning
Baicen Xiao, Bhaskar Ramasubramanian and Radha Poovendran

LEARN: ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments
Yash Shukla, Christopher Thierauf, Ramtin Hosseini, Gyan Tatiya and Jivko Sinapov

LEARN: A Deeper Look at Discounting Mismatch in Actor-Critic Algorithms
Shangtong Zhang, Romain Laroche, Harm van Seijen, Shimon Whiteson and Remi Tachet des Combes

3B2-1 Takayuki Ito

APP, SC&CGT, MA&NCGT: Revenue and User Traffic Maximization in Mobile Short-Video Advertising
Dezhi Ran, Weiqiang Zheng, Yunqi Li, Kaigui Bian, Jie Zhang and Xiaotie Deng

MA&NCGT: Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent
Ian Gemp, Rahul Savani, Marc Lanctot, Yoram Bachrach, Thomas Anthony, Richard Everett, Andrea Tacchetti, Tom Eccles and Janos Kramar

KRRP, MA&NCGT: Reasoning about Human-Friendly Strategies in Repeated Keyword Auctions
Francesco Belardinelli, Wojtek Jamroga, Vadim Malvone, Munyque Mittelmann, Aniello Murano and Laurent Perrussel

MA&NCGT: Anti-Malware Sandbox Games
Sujoy Sikdar, Sikai Ruan, Qishen Han, Paween Pitimanaaree, Jeremy Blackthorne, Bulent Yener and Lirong Xia

3B2-2 Paul Scott

MA&NCGT: Incentives to Invite Others to Form Larger Coalitions
Yao Zhang and Dengji Zhao

APP, MA&NCGT: Networked Restless Multi-Armed Bandits for Mobile Interventions
Han Ching Ou, Christoph Siebenbrunner, Jackson Killian, Meredith B Brooks, David Kempe, Yevgeniy Vorobeychik and Milind Tambe

ROBO: Standby-Based Deadlock Avoidance Method for Multi-Agent Pickup and Delivery Tasks
Tomoki Yamauchi, Yuki Miyashita and Toshiharu Sugawara

MA&NCGT: Robust No-Regret Learning in Min-Max Stackelberg Games
Denizalp Goktas, Jiayi Zhao and Amy Greenwald

3B3-1 Minming Li

HUM: Group fairness in bandit arm selection
Candice Schumann, Zhi Lang, Nicholas Mattei and John P. Dickerson

SC&CGT: How to Fairly Allocate Easy and Difficult Chores
Soroush Ebadian, Dominik Peters and Nisarg Shah

SC&CGT: Fair Stable Matching Meets Correlated Preferences
Angelina Brilliantova and Hadi Hosseini

KRRP, SC&CGT: Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems
Aditya Mate, Arpita Biswas, Christoph Siebenbrunner, Susobhan Ghosh and Milind Tambe

3B3-2 Karthik Abinav Sankararaman

LEARN: Evaluating Strategy Exploration in Empirical Game-Theoretic Analysis
Yongzhao Wang, Qiurui Ma and Michael Wellman

LEARN: Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning
Wanqi Xue, Wei Qiu, Bo An, Zinovi Rabinovich, Svetlana Obraztsova and Chai Kiat Yeo

LEARN: Unbiased Asymmetric Reinforcement Learning under Partial Observability
Andrea Baisero and Christopher Amato

LEARN: BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs
Sammie Katt, Hai Nguyen, Frans Oliehoek and Christopher Amato

3B4-1 Guangliang Li

LEARN: Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments
Axel Browne and Andrew Forney

SIM: Properties of Reputation Lag Attack Strategies
Sean Sirur and Tim Muller

SIM: Cascades and Overexposure in Social Networks: The Budgeted Case
Mohammad Irfan, Kim Hancock and Laura Friel

BLUE SKY, APP: Agent-Assisted Life-Long Education and Learning
Tomas Trescak, Roger Lera Leri, Filippo Bistaffa and Juan Antonio Rodriguez Aguilar

3B4-2 Duncan McElfresh

HUM: Sympathy based Reinforcement Learning agents
Manisha Senadeera, Thommen George Karimpanal, Santu Rana and Sunil Gupta

HUM: Factorial Agent Markov Model: Modeling Other Agents’ Behavior in presence of Dynamic Latent Decision Factors
Liubove Orlov-Savko, Abhinav Jain, Gregory Gremillion, Catherine Neubauer, Jonroy Canady and Vaibhav Unhelkar

APP: Hierarchical Value Decomposition for Effective On-demand Ride-Pooling
Jiang Hao and Pradeep Varakantham

3B5-1 Adrian Haret

SC&CGT: Little House (Seat) on the Prairie: Compactness, Gerrymandering, and Population Distribution
Allan Borodin, Omer Lev, Nisarg Shah and Tyrone Strangway

SC&CGT: Position-based Matching with Multi-Modal Preferences
Yinghui Wen, Aizhong Zhou and Jiong Guo

SC&CGT: Efficient Approximation Algorithms for the Inverse Semivalue Problem
Ilias Diakonikolas, Chrystalla Pavlou, John Peebles and Alistair Stewart

3B5-2 Yasser Mohammad

LEARN: D3C: Reducing the Price of Anarchy in Multi-Agent Learning
Ian Gemp, Kevin McKee, Richard Everett, Edgar Duenez-Guzman, Yoram Bachrach, David Balduzzi and Andrea Tacchetti

LEARN: Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning
Seung Hyun Kim, Neale Van Stralen, Girish Chowdhary and Huy T. Tran

BLUE SKY, ROBO: Robots Teaching Humans: A New Communication Paradigm via Reverse Teleoperation
Rika Antonova and Ankur Handa

JAAMAS, LEARN: Goal-driven Active Reinforcement Learning with Human Teachers
Nicolas Bougie and Ryutaro Ichise

3B6-1 Mohammad Hasan

COIN, SIM: Hacking the Colony: On the Disruptive Effect of Misleading Pheromone and How to Defend against It
Ashay Aswale, Antonio Lopez, Aukkawut Ammartayakun and Carlo Pinciroli

HUM: Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning
Parand Alizadeh Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte and Sheila A. McIlraith

HUM: Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming
Aaquib Tabrez, Matthew B. Luebbers and Bradley Hayes

LEARN: Multi-Objective Reinforcement Learning with Non-Linear Scalarization
Mridul Agarwal, Vaneet Aggarwal and Tian Lan

3B6-2 Mark Reynolds

LEARN: Characterizing Attacks on Deep Reinforcement Learning
Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li and Dawn Song

LEARN: Lyapunov Exponents for Diversity in Differentiable Games
Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz and Jakob Foerster

LEARN: Spiking Pitch Black: Poisoning an Unknown Environment to Attack Unknown Reinforcement Learners
Hang Xu, Xinghua Qu and Zinovi Rabinovich

LEARN: Anomaly Guided Policy Learning from Imperfect Demonstrations
Zi-Xuan Chen, Xin-Qiang Cai, Yuan Jiang and Zhi-Hua Zhou

AAMAS DAY 3, SLOT C, May 13th, 19.00-21.00 NZST (UTC +12)

3C1-1 Pieter Libin

LEARN: Poincaré-Bendixson Limit Sets in Multi-Agent Learning
Aleksander Czechowski and Georgios Piliouras

LEARN: The Dynamics of Q-learning in Population Games: a Physics-inspired Continuity Equation Model
Shuyue Hu, Chin-Wing Leung, Ho-Fung Leung and Harold Soh

LEARN: Learning Equilibria in Mean-Field Games: Introducing Mean-Field PSRO
Paul Muller, Mark Rowland, Romuald Elie, Georgios Piliouras, Julien Perolat, Mathieu Lauriere, Raphael Marinier, Olivier Pietquin and Karl Tuyls

LEARN: Concave Utility Reinforcement Learning: the Mean-field Game viewpoint
Matthieu Geist, Julien Pérolat, Mathieu Laurière, Romuald Elie, Sarah Perrin, Oliver Bachem, Rémi Munos and Olivier Pietquin

3C1-2 Chongjie Zhang

BLUE SKY, LEARN: Towards Anomaly Detection in Reinforcement Learning
Robert Müller, Steffen Illium, Thomy Phan, Tom Haider and Claudia Linnhoff-Popien

LEARN: Learning to Transfer Role Assignment Across Team Sizes
Dung Nguyen, Phuoc Nguyen, Svetha Venkatesh and Truyen Tran

LEARN: Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration
Lukas Schäfer, Filippos Christianos, Josiah P. Hanna and Stefano V. Albrecht

LEARN: SIDE: State Inference for Partially Observable Cooperative Multi-Agent Reinforcement Learning
Zhiwei Xu, Yunpeng Bai, Dapeng Li, Bin Zhang and Guoliang Fan

3C2-1 Diodato Ferraioli

SC&CGT: Fair and Truthful Mechanism with Limited Subsidy
Hiromichi Goko, Ayumi Igarashi, Yasushi Kawase, Kazuhisa Makino, Hanna Sumita, Akihisa Tamura, Yu Yokoi and Makoto Yokoo

MA&NCGT: A path-following polynomial equations systems approach for computing Nash equilibria
Hélène Fargier, Paul Jourdan and Régis Sabbadin

SC&CGT: Pareto optimal and popular house allocation with lower and upper quotas
Ágnes Cseh, Tobias Friedrich and Jannik Peters

MA&NCGT: Strategy-Proof House Allocation with Existing Tenants over Social Networks
Bo You, Ludwig Dierks, Taiki Todo, Minming Li and Makoto Yokoo

3C2-2 Paolo Turrini

SC&CGT: Multivariate Algorithmics for Eliminating Envy by Donating Goods
Niclas Boehmer, Robert Bredereck, Klaus Heeger, Dušan Knop and Junjie Luo

SC&CGT: Welfare vs. Representation in Participatory Budgeting
Roy Fairstein, Dan Vilenchik, Reshef Meir and Kobi Gal

MA&NCGT: One-Sided Matching Markets with Endowments: Equilibria and Algorithms
Jugal Garg, Thorben Tröbst and Vijay Vazirani

JAAMAS, MA&NCGT: Designing Efficient and Fair Mechanisms for Multi-Type Resource Allocation
Xiaoxi Guo, Sujoy Sikdar, Haibin Wang, Lirong Xia, Yongzhi Cao and Hanpin Wang

3C3-1 Davide Grossi

SC&CGT: Computing Nash Equilibria for District-based Nominations
Paul Harrenstein and Paolo Turrini

SC&CGT: Selecting PhD Students and Projects with Limited Funding
Jatin Jindal, Jérôme Lang, Katarína Cechlárová and Julien Lesca

SC&CGT: Facility Location With Approval Preferences: Strategyproofness and Fairness
Edith Elkind, Minming Li and Houyu Zhou

SC&CGT: Computing Balanced Solutions for Large International Kidney Exchange Schemes
Marton Benedek, Peter Biro, Walter Kern and Daniel Paulusma

3C3-2 Patrick Lederer

SC&CGT: Coalition Formation Games and Social Ranking Solutions
Roberto Lucchetti, Stefano Moretti and Tommaso Rea

SIM, SC&CGT: Cooperation and learning dynamics under risk diversity and financial incentives
Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Mohamed Chetouani and Francisco C. Santos

JAAMAS, SC&CGT: Towards addressing dynamic multi-agent task allocation in law enforcement
Itshak Tkach and Sofia Amador Nelke

JAAMAS, SC&CGT: Voting with Random Classifiers in Ensembles (VORACE)
Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini and Francesca Rossi

3C4-1 Roel Boumans

HUM: Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities
Alexander Wich, Holger Schultheis and Michael Beetz

ROBO: Autonomous Swarm Shepherding Using Curriculum-Based Reinforcement Learning
Aya Hussein, Eleni Petraki, Sondoss Elsawah and Hussein A. Abbass

KRRP: A Symbolic Representation for Probabilistic Dynamic Epistemic Logic
Sébastien Gamblin, Alexandre Niveau and Maroua Bouzid

KRRP: Multi-Agent Path Finding for Precedence-Constrained Goal Sequences
Han Zhang, Jingkai Chen, Jiaoyang Li, Brian Williams and Sven Koenig

3C4-2 Minh Kieu

SIM: Agent-based modeling and simulation for malware spreading in D2D networks
Ziyad Benomar, Chaima Ghribi, Elie Cali, Alexander Hinsen and Benedikt Jahnel

SIM: Segregation in social networks of heterogeneous agents acting under incomplete information
D. Kai Zhang and Alexander Carver

JAAMAS, COIN, KRRP: Enabling BDI group plans with coordination middleware
Stephen Cranefield

JAAMAS, SIM, EMAS: Automatic calibration framework of agent-based models for dynamic and heterogeneous parameters
Dongjun Kim, Tae-Sub Yun, Il-Chul Moon and Jang Won Bae

3C5-1 Vincent Corruble

LEARN: Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning
Milad Kazemi, Mateo Perez, Fabio Somenzi, Sadegh Soudjani, Ashutosh Trivedi and Alvaro Velasquez

LEARN: Learning Theory of Mind via Dynamic Traits Attribution
Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh and Truyen Tran

LEARN: Robust Learning from Observation with Model Misspecification
Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy and Adrian Weller

LEARN: Anomaly Guided Policy Learning from Imperfect Demonstrations
Zi-Xuan Chen, Xin-Qiang Cai, Yuan Jiang and Zhi-Hua Zhou

3C5-2 Badran Raddaoui

EMAS: Testing requirements via User and System Stories in Agent Systems
Sebastian Rodriguez, John Thangarajah, Michael Winikoff and Dhirendra Singh

HUM: Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction
Sharadhi Alape Suryanarayana, David Sarne and Sarit Kraus

KRRP: A Declarative Framework for Maximal k-plex Enumeration Problems
Said Jabbour, Nizar Mhadhbi, Badran Raddaoui and Lakhdar Sais

HUM: Towards Pluralistic Value Alignment: Aggregating Value Systems Through Lp-Regression
Roger Lera-Leri, Filippo Bistaffa, Marc Serramia, Maite Lopez-Sanchez and Juan Rodriguez-Aguilar

3C6-2 Pallavi Jain

LEARN: Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments
Axel Browne and Andrew Forney

SC&CGT: Three-Dimensional Popular Matching with Cyclic Preferences
Ágnes Cseh and Jannik Peters

SC&CGT: Tracking Truth by Weighting Proxies in Liquid Democracy
Yuzhe Zhang and Davide Grossi

Schedule of the AAMAS posters and demo papers

 

Authors | Title | Type

AAMAS DAY 1, SLOT PD1C, May 11th, 22.00–23.00 NZST (UTC +12)

Christos Verginis, Zhe Xu and Ufuk Topcu | Non-Parametric Neuro-Adaptive Coordination of Multi-Agent Systems | Poster
Al-Hussein Abutaleb and Bruno Yun | Chameleon – A Framework for Developing Conversational Agents for Medical Training Purposes | Demo
Arno Hartholt, Ed Fast, Andrew Leeds, Kevin Kim, Andrew Gordon, Kyle McCullough, Volkan Ustun and Sharon Mozgai | Demonstrating the Rapid Integration & Development Environment (RIDE): Embodied Conversational Agent (ECA) and Multiagent Capabilities | Demo
Bruno Fernandes, André Diogo, Fabio Silva, José Neves and Cesar Analide | KnowLedger – A Multi-Agent System Blockchain for Smart Cities Data | Demo
Bruno Fernandes, Paulo Novais and Cesar Analide | A Multi-Agent System for Automated Machine Learning | Demo
Jan Buermann, Dimitar Georgiev, Enrico H. Gerding, Lewis Hill, Obaid Malik, Alexandru Pop, Matthew Pun, Sarvapali D. Ramchurn, Elliot Salisbury and Ivan Stojanovic | An Agent-Based Simulator for Maritime Transport Decarbonisation | Demo
John Harwell, London Lowmanstone and Maria Gini | SIERRA: A Modular Framework for Research Automation | Demo
Matheus Aparecido Do Carmo Alves, Amokh Varma, Yehia Elkhatib and Leandro Soriano Marcolino | AdLeap-MAS: An Open-source Multi-Agent Simulator for Ad-hoc Reasoning | Demo
Yinghui Pan, Junhan Chen, Yifeng Zeng, Zhangrui Yao, Qianwen Li, Biyang Ma, Yi Ji and Zhong Ming | LBfT: Learning Bayesian Network Structures from Text in Autonomous Typhoon Response Systems | Demo
Abdul Rahman Kreidieh, Yibo Zhao, Samyak Parajuli and Alexandre Bayen | Learning Generalizable Multi-Lane Mixed Autonomy Control Strategies in Single-Lane Settings | Poster
Angelo Ferrando and Rafael C. Cardoso | Safety Shields, an Automated Failure Handling Mechanism for BDI Agents | Poster
Benjamin Irwin, Antonio Rago and Francesca Toni | Argumentative Forecasting | Poster
Conor F Hayes, Diederik M. Roijers, Enda Howley and Patrick Mannion | Decision-Theoretic Planning for the Expected Scalarised Returns | Poster
David Klaška, Antonin Kucera, Vit Musil and Vojtech Rehak | Minimizing Expected Intrusion Detection Time in Adversarial Patrolling | Poster
Fredrik Präntare, George Osipov and Leif Eriksson | Concise Representations and Complexity of Combinatorial Assignment Problems | Poster
Giulio Mazzi, Alberto Castellini and Alessandro Farinelli | Active Generation of Logical Rules for POMCP Shielding | Poster
Halvard Hummel and Magnus Lie Hetland | Guaranteeing Half-Maximin Shares Under Cardinality Constraints | Poster
Helen Harman and Elizabeth Sklar | Multi-agent Task Allocation for Fruit Picker Team Formation | Poster
Ilias Kazantzidis, Timothy Norman, Yali Du and Christopher T. Freeman | How to train your agent: Active Learning from Human Preferences and Justifications in Safety-critical Environments | Poster
Jieting Luo and Mehdi Dastani | Modeling Affective Reaction in Multi-agent Systems | Poster
Lukasz Mikulski, Wojtek Jamroga and Damian Kurpiewski | Towards Assume-Guarantee Verification of Strategic Ability | Poster
Markus Ewert, Stefan Heidekrüger and Martin Bichler | Approaching the Overbidding Puzzle in All-Pay Auctions: Explaining Human Behavior through Bayesian Optimization and Equilibrium Learning | Poster
Niclas Boehmer, Tomohiro Koana and Rolf Niedermeier | A Refined Complexity Analysis of Fair Districting over Graphs | Poster
Pierre Cardi, Laurent Gourves and Julien Lesca | On Fair and Efficient Solutions for Budget Apportionment | Poster
Sanjay Chandlekar, Easwar Subramanian, Sanjay Bhat, Praveen Paruchuri and Sujit Gujar | Multi-unit Double Auctions: Equilibrium Analysis and Bidding Strategy using DDPG in Smart-grids | Poster
Theophile Cabannes, Mathieu Lauriere, Julien Perolat, Raphael Marinier, Sertan Girgin, Sarah Perrin, Olivier Pietquin, Alexandre Bayen, Eric Goubault and Romuald Elie | Solving N player dynamic routing games with congestion: a mean field approach | Poster
Yehia Abd Alrahman, Shaun Azzopardi and Nir Piterman | R-CHECK: A Model Checker for Verifying Reconfigurable MAS | Poster
Yongjie Yang | On the Complexity of Controlling Amendment and Successive Winners | Poster
Yuanzi Zhu and Carmine Ventre | Irrational behaviour and globalisation | Poster

AAMAS DAY 2, SLOT PD2B, May 12th, 14.00-15.00 NZST (UTC +12)

Biyang Ma, Yinghui Pan, Yifeng Zeng and Zhong Ming: Ev-IDID: Enhancing Solutions to Interactive Dynamic Influence Diagrams through Evolutionary Algorithms (Demo)
Hala Khodr, Barbara Bruno, Aditi Kothiyal and Pierre Dillenbourg: Cellulan World: Interactive platform to learn swarm behaviors (Demo)
Naman Shah, Pulkit Verma, Trevor Angle and Siddharth Srivastava: JEDAI: A System for Skill-Aligned Explainable Robot Planning (Demo)
Aldo Iván Ramírez Abarca and Jan Broersen: A Stit Logic of Responsibility (Poster)
Alvaro Gunawan, Ji Ruan and Xiaowei Huang: A Graph Neural Network Reasoner for Game Description Language (Poster)
Anusha Srikanthan and Harish Ravichandar: Resource-Aware Adaptation of Heterogeneous Strategies for Coalition Formation (Poster)
Gaurav Dixit and Kagan Tumer: Behavior Exploration and Team Balancing for Heterogeneous Multiagent Coordination (Poster)
George Li, Arash Haddadan, Ann Li, Madhav Marathe, Aravind Srinivasan, Anil Kumar Vullikanti and Zeyu Zhao: Theoretical Models and Preliminary Results for Contact Tracing and Isolation (Poster)
Isaac Sheidlower, Elaine Short and Allison Moore: Environment Guided Interactive Reinforcement Learning: Learning from Binary Feedback in High-Dimensional Robot Task Environments (Poster)
Ishika Singh, Gargi Singh and Ashutosh Modi: Pre-trained Language Models as Prior Knowledge for Playing Text-based Games (Poster)
Jaleh Zand, Jack Parker-Holder and Stephen Roberts: On-the-fly Strategy Adaptation for ad-hoc Agent Coordination (Poster)
Jinming Ma, Yingfeng Chen, Feng Wu, Xianfeng Ji and Yu Ding: Multimodal Reinforcement Learning with Effective State Representation Learning (Poster)
Kishan Chandan, Jack Albertson and Shiqi Zhang: Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration (Poster)
Palash Dey: Priced Gerrymandering (Poster)
Paul Tylkin, Tsun-Hsuan Wang, Tim Seyde, Kyle Palko, Ross Allen, Alexander Amini and Daniela Rus: Autonomous Flight Arcade Challenge: Single- and Multi-Agent Learning Environments for Aerial Vehicles (Poster)
Sriram Gopalakrishnan and Subbarao Kambhampati: Minimizing Robot Navigation Graph For Position-Based Predictability By Humans (Poster)
Ulrik Brandes, Christian Laußmann and Jörg Rothe: Voting for Centrality (Poster)
Vasilis Livanos, Ruta Mehta and Aniket Murhekar: (Almost) Envy-Free, Proportional and Efficient Allocations of an Indivisible Mixed Manna (Poster)
Vignesh Viswanathan, Megha Bose and Praveen Paruchuri: Moving Target Defense under Uncertainty for Web Applications (Poster)
Wenhan Huang, Kai Li, Kun Shao, Tianze Zhou, Jun Luo, Dongge Wang, Hangyu Mao, Jianye Hao, Jun Wang and Xiaotie Deng: Multiagent Q-learning with Sub-Team Coordination (Poster)
Xiaoyan Zhang, Graham Coates, Sarah Dunn and Jean Hall: An Agent-based Model for Emergency Evacuation from a Multi-floor Building (Poster)
Ziyi Xu, Xue Cheng and Yangbo He: Performance of Deep Reinforcement Learning for High Frequency Market Making on Actual Tick Data (Poster)

AAMAS DAY 2, SLOT PD2C, May 12th, 22.00-23.00 NZST (UTC +12)

Alison Roberto Panisson, Peter McBurney and Rafael H. Bordini: Towards an Enthymeme-Based Communication Framework (Poster)
Anna Maria Kerkmann and Jörg Rothe: Popularity and Strict Popularity in Altruistic Hedonic Games and Minimum-Based Altruistic Hedonic Games (Poster)
Annemarie Borg and Floris Bex: Contrastive Explanations for Argumentation-Based Conclusions (Poster)
Athina Georgara, Juan Antonio Rodriguez Aguilar, Carles Sierra, Ornella Mich, Raman Kazhamiakin, Alessio Palmero Aprosio and Jean-Christophe Pazzaglia: An anytime heuristic algorithm for allocating many teams to many tasks (Poster)
Aviram Aviv, Yaniv Oshrat, Samuel Assefa, Toby Mustapha, Daniel Borrajo, Manuela Veloso and Sarit Kraus: Advising Agent for Service-Providing Live-Chat Operators (Poster)
Dimitrios Troullinos, Georgios Chalkiadakis, Vasilis Samoladas and Markos Papageorgiou: Max-sum with Quadtrees for Continuous DCOPs with Application to Lane-Free Autonomous Driving (Poster)
Diogo Rato, Marta Couto and Rui Prada: Behavior vs Appearance: what type of adaptations are more socially motivated? (Poster)
Dorothea Baumeister and Tobias Alexander Hogrebe: On the Average-Case Complexity of Predicting Round-Robin Tournaments (Poster)
Felipe Garrido Lucero and Rida Laraki: Stable Matching Games (Poster)
Francis Rhys Ward, Francesca Toni and Francesco Belardinelli: On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios (Poster)
Henri Meess, Jeremias Gerner, Daniel Hein, Stefanie Schmidtner and Gordon Elger: Reinforcement Learning for Traffic Signal Control Optimization: A Concept for Real-World Implementation (Poster)
Jad Bassil, Benoît Piranda, Abdallah Makhoul and Julien Bourgeois: A New Porous Structure for Modular Robots (Poster)
Jennifer She, Jayesh Gupta and Mykel Kochenderfer: Agent-Time Attention for Sparse Rewards Multi-Agent Reinforcement Learning (Poster)
Juncheng Dong, Suya Wu, Mohammadreza Soltani and Vahid Tarokh: Multi-Agent Adversarial Attacks for Multi-Channel Communications (Poster)
Martino Bernasconi, Federico Cacciamani, Simone Fioravanti, Nicola Gatti and Francesco Trovò: The Evolutionary Dynamics of Soft-Max Policy Gradient in Multi-Agent Settings (Poster)
Michał Zawalski, Błażej Osiński, Henryk Michalewski and Piotr Miłoś: Off-Policy Correction For Multi-Agent Reinforcement Learning (Poster)
Miguel Suau, Jinke He, Matthijs Spaan and Frans Oliehoek: Speeding up Deep Reinforcement Learning through Influence-Augmented Local Simulators (Poster)
Önder Gürcan: Proof-of-Work as a Stigmergic Consensus Algorithm (Poster)
Pallavi Bagga, Nicola Paoletti and Kostas Stathis: Deep Learnable Strategy Templates for Multi-Issue Bilateral Negotiation (Poster)
Panagiotis Kanellopoulos, Maria Kyropoulou and Hao Zhou: Forgiving Debt in Financial Network Games (Poster)
Rafid Ameer Mahmud, Fahim Faisal, Saaduddin Mahmud and Md. Mosaddek Khan: A Simulation Based Online Planning Algorithm for Multi-Agent Cooperative Environments (Poster)
Raphaël Avalos, Mathieu Reymond, Ann Nowé and Diederik M. Roijers: Local Advantage Networks for Cooperative Multi-Agent Reinforcement Learning (Poster)
Samhita Kanaparthy, Sankarshan Damle and Sujit Gujar: REFORM: Reputation Based Fair and Temporal Reward Framework for Crowdsourcing (Poster)
Seyed Esmaeili, Sharmila Duppala, Vedant Nanda, Aravind Srinivasan and John Dickerson: Rawlsian Fairness in Online Bipartite Matching: Two-sided, Group, and Individual (Poster)
Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer and Nihar Shah: Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design (Poster)
Yohai Trabelsi, Abhijin Adiga, Sarit Kraus and S.S. Ravi: Maximizing Resource Allocation Likelihood with Minimum Compromise (Poster)

AAMAS DAY 3, SLOT PD3B, May 13th, 14.00-15.00 NZST (UTC +12)

Arnab Maiti and Palash Dey: Parameterized Algorithms for Kidney Exchange (Poster)
Darshan Chakrabarti, Jie Gao, Aditya Saraf, Grant Schoenebeck and Fang-Yi Yu: Optimal Local Bayesian Differential Privacy over Markov Chains (Poster)
Diyi Hu, Chi Zhang, Viktor Prasanna and Bhaskar Krishnamachari: Intelligent Communication over Realistic Wireless Networks in Multi-Agent Cooperative Games (Poster)
Enwei Guo, Xiumin Wang and Weiwei Wu: Adaptive Aggregation Weight Assignment for Federated Learning: A Deep Reinforcement Learning Approach (Poster)
Erik Wijmans, Irfan Essa and Dhruv Batra: How to Train PointGoal Navigation Agents on a (Sample and Compute) Budget (Poster)
Everardo Gonzalez, Lucie Houel, Radhika Nagpal and Melinda Malley: Influencing Emergent Self-Assembled Structures in Robotic Collectives Through Traffic Control (Poster)
Flavia Barsotti, Rüya Gökhan Koçer and Fernando P. Santos: Can Algorithms be Explained Without Compromising Efficiency? The Benefits of Detection and Imitation in Strategic Classification (Poster)
Guan-Ting Liu, Guan-Yu Lin and Pu-Jen Cheng: Improving Generalization with Cross-State Behavior Matching in Deep Reinforcement Learning (Poster)
Jennifer Leaf and Julie Adams: Measuring Resilience in Collective Robotic Algorithms (Poster)
Jiayu Chen, Jingdi Chen, Tian Lan and Vaneet Aggarwal: Multi-agent Covering Option Discovery through Kronecker Product of Factor Graphs (Poster)
Junsong Gao, Ziyu Chen, Dingding Chen and Wenxin Zhang: Beyond Uninformed Search: Improving Branch-and-bound Based Acceleration Algorithms for Belief Propagation via Heuristic Strategies (Poster)
Justin Payan and Yair Zick: I Will Have Order! Optimizing Orders for Fair Reviewer Assignment (Poster)
Kazi Ashik Islam, Madhav Marathe, Henning Mortveit, Samarth Swarup and Anil Vullikanti: Data-driven Agent-based Models for Optimal Evacuation of Large Metropolitan Areas for Improved Disaster Planning (Poster)
Masanori Hirano, Kiyoshi Izumi and Hiroki Sakaji: Implementation of Actual Data for Artificial Market Simulation (Poster)
Pinkesh Badjatiya, Mausoom Sarkar, Nikaash Puri, Jayakumar Subramanian, Abhishek Sinha, Siddharth Singh and Balaji Krishnamurthy: Status-quo policy gradient in Multi-Agent Reinforcement Learning (Poster)
Ravi Vythilingam, Deborah Richards and Paul Formosa: The Ethical Acceptability of Artificial Social Agents (Poster)
Samuel Arseneault, David Vielfaure and Giovanni Beltrame: RASS: Risk-Aware Swarm Storage (Poster)
Shang Wang, Mathieu Reymond, Athirai Irissappane and Diederik M. Roijers: Near On-Policy Experience Sampling in Multi-Objective Reinforcement Learning (Poster)
Shivika Narang, Arpita Biswas and Y Narahari: On Achieving Leximin Fairness and Stability in Many-to-One Matchings (Poster)
Tesshu Hanaka, Toshiyuki Hirose and Hirotaka Ono: Capacitated Network Design Games on a Generalized Fair Allocation Model (Poster)
Wilkins Leong, Julie Porteous and John Thangarajah: Automated Story Sifting Using Story Arcs (Poster)
Will Ma, Pan Xu and Yifan Xu: Group-level Fairness Maximization in Online Bipartite Matching (Poster)
Yue Jin, Shuangqing Wei, Jian Yuan and Xudong Zhang: Learning to Advise and Learning from Advice in Cooperative Multiagent Reinforcement Learning (Poster)