Program Overview
The schedule and the links to the virtual rooms are provided at
https://underline.io/events/288/schedule?day=2022-05-10T22%3A00%3A00.000Z&trackId=1488
Time Slots
SLOT A | 3.00*-7.00* Auckland | 23.00-3.00* Beijing | 20.30-0.30* Kolkata | 17.00-21.00 Paris | 11.00-15.00 New York
SLOT B | 11.00-15.00 Auckland | 7.00-11.00 Beijing | 4.30-8.30 Kolkata | 1.00-5.00 Paris | 19.00-23.00** New York
SLOT C | 19.00-23.00 Auckland | 15.00-19.00 Beijing | 12.30-16.30 Kolkata | 9.00-13.00 Paris | 3.00-7.00 New York
* indicates the next day relative to the other listed times.
** indicates the previous day relative to the other listed times.
Program Schedule (Dates and times are given in Auckland time zone)
AAMAS DAY 1 |
SLOT C | https://www.timeanddate.com/worldclock/converter.html?iso=20220511T070000&p1=179&p2=195&p3=54&p4=33&p5=22 | |||||||
18.40 – 19.00 | May 11th, 18.40 – 19.00 | Plenary session: 1st Opening | ||||||
19.00 – 20.00 | May 11th, 19.00 – 20.00 | Plenary session: Invited talk by Joanna Seibt: “The Aims of Social Robotics” | ||||||
20.00 – 21.00 | May 11th, 20.00 – 21.00 | 1C1-1 | 1C2-1 | 1C3-1 | 1C4-1 | 1C5-1 | 1C6-1 | |
21.00 – 22.00 | May 11th, 21.00 – 22.00 | 1C1-2 | 1C2-2 | 1C3-2 | 1C4-2 | 1C5-2 | 1C6-2 | |
22.00 – 23.00 | May 11th, 22.00 – 23.00 | Poster & Demo Session PD1C | ||||||
SLOT A | https://www.timeanddate.com/worldclock/converter.html?iso=20220511T150000&p1=179&p2=195&p3=54&p4=33&p5=22 | |||||||
2.40-3.00 | May 12th, 2.40-3.00 | Plenary session: 2nd Opening | ||||||
3.00-4.00 | May 12th, 3.00-4.00 | 1A1-1 | 1A2-1 | 1A3-1 | 1A4-1 | 1A5-1 | 1A6-1 | [D&I Activity 1] Workshop on Artificial Intelligence – Diversity, Belonging, Equity, and Inclusion (AIDBEI) |
4.00-5.00 | May 12th, 4.00-5.00 | 1A1-2 | 1A2-2 | 1A3-2 | 1A4-2 | 1A5-2 | 1A6-2 | |
5.00-6.00 | May 12th, 5.00-6.00 | 1A1-3 | 1A2-3 | 1A3-3 | 1A4-3 | 1A5-3 | 1A6-3 | |
6.00-7.00 | May 12th, 6.00-7.00 | Plenary session: Invited talk by Shafi Goldwasser: “Safe ML: Robustness, Verification and Privacy” |
AAMAS DAY 2 |
SLOT B | https://www.timeanddate.com/worldclock/converter.html?iso=20220511T230000&p1=179&p2=195&p3=54&p4=33&p5=22 | |||||||
10.40-11.00 | May 12th, 10.40-11.00 | Plenary session: 3rd Opening | ||||||
11.00-12.00 | May 12th, 11.00-12.00 | Plenary session: Invited talk by Mark Sagar: “Autonomous Animation” | ||||||
12.00-13.00 | May 12th, 12.00-13.00 | 2B1-1 | 2B2-1 | 2B3-1 | 2B4-1 | 2B5-1 | [D&I Activity 2] Queer in AI Social | |
13.00-14.00 | May 12th, 13.00-14.00 | 2B1-2 | 2B2-2 | 2B3-2 | 2B4-2 | |||
14.00-15.00 | May 12th, 14.00-15.00 | Poster & Demo Session PD2B | ||||||
SLOT C | https://www.timeanddate.com/worldclock/converter.html?iso=20220512T070000&p1=179&p2=195&p3=54&p4=33&p5=22 | |||||||
19.00 – 20.00 | May 12th, 19.00 – 20.00 | 2C1-1 | 2C2-1 | 2C3-1 | 2C4-1 | 2C5-1 | ||
20.00 – 21.00 | May 12th, 20.00 – 21.00 | 2C1-2 | 2C2-2 | 2C3-2 | 2C4-2 | 2C5-2 | ||
21.00 – 22.00 | May 12th, 21.00 – 22.00 | 2C1-3 | 2C2-3 | 2C3-3 | 2C4-3 | 2C5-3 | ||
22.00 – 23.00 | May 12th, 22.00 – 23.00 | Poster & Demo Session PD2C | ||||||
SLOT A | https://www.timeanddate.com/worldclock/converter.html?iso=20220512T150000&p1=179&p2=195&p3=54&p4=33&p5=22 | |||||||
3.00-4.00 | May 13th, 3.00-4.00 | Plenary session: Invited talk by Maria Gini: “Decentralized allocation of tasks to agents and robots” | ||||||
4.00-5.00 | May 13th, 4.00-5.00 | 2A1-1 | 2A2-1 | 2A3-1 | 2A4-1 | 2A5-1 | 2A6-1 | 2A7-1 |
5.00-6.00 | May 13th, 5.00-6.00 | 2A1-2 | 2A2-2 | 2A3-2 | 2A4-2 | 2A5-2 | 2A6-2 | 2A7-2 |
6.00-7.00 | May 13th, 6.00-7.00 | 2A1-3 | 2A2-3 | 2A3-3 | 2A4-3 | 2A5-3 | 2A6-3 |
AAMAS DAY 3 |
SLOT B | https://www.timeanddate.com/worldclock/converter.html?iso=20220512T230000&p1=179&p2=195&p3=54&p4=33&p5=22 | ||||||||
11.00-12.00 | May 13th, 11.00-12.00 | Plenary session: Invited talk by Bryan Wilder: “AI for Population Health: Melding Data and Algorithms on Networks” | |||||||
12.00-13.00 | May 13th, 12.00-13.00 | 3B1-1 | 3B2-1 | 3B3-1 | 3B4-1 | 3B5-1 | 3B6-1 | ||
13.00-14.00 | May 13th, 13.00-14.00 | 3B1-2 | 3B2-2 | 3B3-2 | 3B4-2 | 3B5-2 | 3B6-2 | ||
14.00-15.00 | May 13th, 14.00-15.00 | Poster & Demo Session PD3B | |||||||
SLOT C | https://www.timeanddate.com/worldclock/converter.html?iso=20220513T070000&p1=179&p2=195&p3=54&p4=33&p5=22 | ||||||||
19.00 – 20.00 | May 13th, 19.00 – 20.00 | 3C1-1 | 3C2-1 | 3C3-1 | 3C4-1 | 3C5-1 | |||
20.00 – 21.00 | May 13th, 20.00 – 21.00 | 3C1-2 | 3C2-2 | 3C3-2 | 3C4-2 | 3C5-2 | 3C6-2 | ||
21.00 – 22.00 | May 13th, 21.00 – 22.00 | Plenary session: closing, followed by business meeting | |||||||
Paper Sessions
Schedule of the AAMAS main-track full papers, Blue Sky papers, and JAAMAS papers
Session | Session Chair | Papers |
AAMAS DAY 1, SLOT C, May 11th, 20.00–22.00 NZST (UTC +12) |
1C1-1 | Jan Maly |
SC&CGT: Equilibria in Schelling Games: Computational Hardness and Robustness
SC&CGT: Proportional Representation in Matching Markets: Selecting Multiple Matchings under Dichotomous Preferences
SC&CGT: Position-based Matching with Multi-Modal Preferences
SC&CGT: Coalition Formation Games and Social Ranking Solutions
1C1-2 | Katsuhide Fujita |
MA&NCGT: Being Central on the Cheap: Stability in Heterogeneous Multiagent Centrality Games
MA&NCGT: Corruption in Auctions: Social Welfare Loss in Hybrid Multi-Unit Auctions
MA&NCGT: Automated Configuration and Usage of Strategy Portfolios for Bargaining
MA&NCGT: Incentives to Invite Others to Form Larger Coalitions
1C2-1 | Tibor Bosse |
JAAMAS, HUM: Trust repair in human-agent teams: the effectiveness of explanations and expressing regret
HUM: Explainability in Multi-Agent Path/Motion Planning: User-study-driven taxonomy and requirements
HUM: Building contrastive explanations for multi-agent team formation
HUM: Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities
1C2-2 | Zehong (Jimmy) Cao |
HUM: Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions
HUM: CAPS: Comprehensible Abstract Policy Summaries for Explaining Reinforcement Learning Agents
HUM: Sympathy based Reinforcement Learning agents
1C3-1 | Emmanouil Rigas |
APP: Auction-based and Distributed Optimization Approaches for Scheduling Observations in Satellite Constellations with Exclusive Orbit Portions
BLUE SKY, APP: “Go to the Children”: Rethinking to Intelligent Agent Design and Programming in a Developmental Learning Perspective
APP: Deep Reinforcement Learning for Active Wake Control
BLUE SKY, APP: Agent-Assisted Life-Long Education and Learning
1C3-2 | Angelo Ferrando |
ROBO: Coordinated Multi-Agent Path Finding for Drones and Trucks over Road Networks
ROBO: Context-Aware Modelling for Multi-Robot Systems Under Uncertainty
ROBO: Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation
ROBO: Refined Hardness of Distance-Optimal Multi-Agent Path Finding
1C4-1 | Minming Li |
JAAMAS, KRRP, MA&NCGT: GDL as a Unifying Domain Description Language for Declarative Automated Negotiation (Extended Abstract)
JAAMAS, MA&NCGT: Combining quantitative and qualitative reasoning in concurrent multi-player games
JAAMAS, KRRP, MA&NCGT: Concurrent Negotiations with Global Utility Functions
MA&NCGT: Anti-Malware Sandbox Games
1C4-2 | Michael Winikoff |
APP: Fully-Autonomous, Vision-based Traffic Signal Control: from Simulation to Reality
APP: Trajectory Coordination based on Distributed Constraint Optimization Techniques in Unmanned Air Traffic Management
APP: Hierarchical Value Decomposition for Effective On-demand Ride-Pooling
1C5-1 | Natalia Criado |
COIN: GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning
COIN: Learning Efficient Diverse Communication for Cooperative Heterogeneous Teaming
KRRP, MA&NCGT: Reasoning about Human-Friendly Strategies in Repeated Keyword Auctions
COIN: A Distributed Differentially Private Algorithm for Resource Allocation in Unboundedly Large Settings
1C5-2 | Badran Raddaoui |
BLUE SKY, COIN: Macro Ethics for Governing Equitable Sociotechnical Systems
COIN: Ensemble and Incremental Learning for Norm Violation Detection
KRRP: Online Collective Multiagent Planning by Offline Policy Reuse with Applications to City-Scale Mobility-on-Demand Systems
1C6-1 | Xenofon Evangelopoulos |
LEARN: Individual-Level Inverse Reinforcement Learning for Mean Field Games
LEARN: REMAX: Relational Representation for Multi-Agent Exploration
LEARN: Learning to Transfer Role Assignment Across Team Sizes
LEARN: Scalable Multi-Agent Model-Based Reinforcement Learning
1C6-2 | Emma Norling |
EMAS: ASM-PPO: Asynchronous and Scalable Multi-Agent PPO for Cooperative Charging
HUM: Multimodal analysis of the predictability of hand-gesture properties
HUM: Coaching agent: making recommendations for behavior change. A case study on improving eating habits
AAMAS DAY 1, SLOT A, May 12th, 3.00-6.00 NZST (UTC +12) |
1A1-1 | Martin Bullinger |
SC&CGT: Optimal Matchings with One-Sided Preferences: Fixed and Cost-Based Quotas
SC&CGT: Three-Dimensional Popular Matching with Cyclic Preferences
SC&CGT: Fair Stable Matching Meets Correlated Preferences
SC&CGT: Computing Balanced Solutions for Large International Kidney Exchange Schemes
1A1-2 | Noam Hazon |
SC&CGT: Multiagent Model-based Credit Assignment for Continuous Control
SC&CGT: Efficient Approximation Algorithms for the Inverse Semivalue Problem
SC&CGT: Pareto optimal and popular house allocation with lower and upper quotas
JAAMAS, SC&CGT: Towards addressing dynamic multi-agent task allocation in law enforcement
1A1-3 | Nisarg Shah |
SC&CGT: A Graph-Based Algorithm for the Automated Justification of Collective Decisions
SC&CGT: Little House (Seat) on the Prairie: Compactness, Gerrymandering, and Population Distribution
SC&CGT: How to Fairly Allocate Easy and Difficult Chores
1A2-1 | Georgios Anagnostopoulos |
LEARN: Scalable Multi-Agent Model-Based Reinforcement Learning
LEARN: Pareto Conditioned Networks
LEARN: Unbiased Asymmetric Reinforcement Learning under Partial Observability
LEARN: Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning
1A2-2 | Felipe Leno da Silva |
LEARN: MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning
LEARN: BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs
LEARN: Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning
LEARN: Multi-Objective Reinforcement Learning with Non-Linear Scalarization
1A2-3 | Diodato Ferraioli |
MA&NCGT: Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent
MA&NCGT: Robust No-Regret Learning in Min-Max Stackelberg Games
MA&NCGT: A path-following polynomial equations systems approach for computing Nash equilibria
MA&NCGT: One-Sided Matching Markets with Endowments: Equilibria and Algorithms
1A3-1 | Roberta Calegari |
KRRP: Betweenness Centrality in Multi-Agent Path Finding
KRRP: Reduction-based Solving of Multi-agent Pathfinding on Large Maps Using Graph Pruning
KRRP: CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces
KRRP: Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning
1A3-2 | Sasha Rubin |
KRRP: Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss
KRRP: Graphical Representation Enhances Human Compliance with Principles for Graded Argumentation Semantics
KRRP: Multiagent Dynamics of Gradual Argumentation Semantics
KRRP: Socially supervised representation learning: the role of subjectivity in learning efficient representations
1A3-3 | Rishi Hazra |
KRRP: Epistemic Reasoning in Jason
KRRP: Logical Theories of Collective Attitudes and the Belief Base Perspective
KRRP: BOID*: Autonomous goal deliberation through abduction
KRRP: Preference-Based Goal Refinement in BDI Agents
1A4-1 | Chris Amato |
LEARN: Off-Policy Evolutionary Reinforcement Learning with Maximum Mutations
LEARN: Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning
LEARN: Poincaré-Bendixson Limit Sets in Multi-Agent Learning
LEARN: Characterizing Attacks on Deep Reinforcement Learning
1A4-2 | Jorge Gómez |
SIM: Asynchronous Opinion Dynamics in Social Networks
SIM: Cascades and Overexposure in Social Networks: The Budgeted Case
SIM, SC&CGT: Cooperation and learning dynamics under risk diversity and financial incentives
SIM: Segregation in social networks of heterogeneous agents acting under incomplete information
1A4-3 | Nirav Ajmeri |
SIM: Knowledge Transmission and Improvement Across Generations do not Need Strong Selection
SIM: Properties of Reputation Lag Attack Strategies
COIN, SIM: Hacking the Colony: On the Disruptive Effect of Misleading Pheromone and How to Defend against It
SIM: Agent-based modeling and simulation for malware spreading in D2D networks
1A5-1 | Ayan Mukhopadhyay |
LEARN: Emergent Cooperation from Mutual Acknowledgment Exchange
APP, SC&CGT, MA&NCGT: Revenue and User Traffic Maximization in Mobile Short-Video Advertising
APP, MA&NCGT: Networked Restless Multi-Armed Bandits for Mobile Interventions
KRRP, SC&CGT: Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems
1A5-2 | Hadi Hosseini |
SC&CGT: Ordinal Maximin Share Approximation for Chores
MA&NCGT: Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting
MA&NCGT: Being Central on the Cheap: Stability in Heterogeneous Multiagent Centrality Games
JAAMAS, KRRP, MA&NCGT: GDL as a Unifying Domain Description Language for Declarative Automated Negotiation (Extended Abstract)
1A5-3 | Pieter Libin |
LEARN: How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents
LEARN: The Dynamics of Q-learning in Population Games: a Physics-inspired Continuity Equation Model
LEARN: A Mean Field Game Model of Spatial Evolutionary Games
LEARN: Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration
1A6-1 | Neil Yorke-Smith |
EMAS: Pippi: Practical Protocol Instantiation
HUM: Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents
ROBO: State Supervised Steering Function for Sampling-based Kinodynamic Planning
APP: Trajectory Coordination based on Distributed Constraint Optimization Techniques in Unmanned Air Traffic Management
1A6-2 | Athirai A. Irissappane |
LEARN: Multi-Agent Curricula and Emergent Implicit Signaling
LEARN: Learning Equilibria in Mean-Field Games: Introducing Mean-Field PSRO
LEARN: Lyapunov Exponents for Diversity in Differentiable Games
LEARN: ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments
1A6-3 | Jivko Sinapov |
LEARN: Scaling Mean Field Games with Online Mirror Descent
ROBO: Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning
LEARN: Any-Play: an Intrinsic Augmentation for Zero-Shot Coordination
BLUE SKY, LEARN: Towards Anomaly Detection in Reinforcement Learning
AAMAS DAY 2, SLOT B, May 12th, 12.00-14.00 NZST (UTC +12) |
2B1-1 | Alan Tsang |
SC&CGT: Ordinal Maximin Share Approximation for Chores
SC&CGT: Beyond Cake Cutting: Allocating Homogeneous Divisible Goods
SC&CGT: Fair and Truthful Mechanism with Limited Subsidy
SC&CGT: Multivariate Algorithmics for Eliminating Envy by Donating Goods
2B1-2 | Makoto Yokoo |
BLUE SKY, SC&CGT: Social Choice Around the Block: On the Computational Social Choice of Blockchain
SC&CGT: How Hard is Bribery in Elections with Randomly Selected Voters
BLUE SKY, SC&CGT: Augmented Democratic Deliberation: Can Conversational Agents Boost Deliberation in Social Media?
SC&CGT: Facility Location With Approval Preferences: Strategyproofness and Fairness
2B2-1 | Vaibhav Unhelkar |
ROBO: State Supervised Steering Function for Sampling-based Kinodynamic Planning
ROBO: Intention-Aware Navigation in Crowds with Extended-Space POMDP Planning
ROBO: A Hierarchical Bayesian Process for Inverse RL in Partially-Controlled Environments
ROBO: Autonomous Swarm Shepherding Using Curriculum-Based Reinforcement Learning
2B2-2 | Dengji Zhao |
MA&NCGT: Balancing Fairness and Efficiency in Traffic Routing via Interpolated Traffic Assignment
MA&NCGT: The Generalized Magician Problem under Unknown Distributions and Related Applications
MA&NCGT: Strategy-Proof House Allocation with Existing Tenants over Social Networks
JAAMAS, MA&NCGT: Designing Efficient and Fair Mechanisms for Multi-Type Resource Allocation
2B3-1 | Rafael Bordini |
EMAS: Pippi: Practical Protocol Instantiation
EMAS: FCMNet: Full Communication Memory Net for Team-Level Cooperation in Multi-Agent Systems
EMAS: ASM-PPO: Asynchronous and Scalable Multi-Agent PPO for Cooperative Charging
EMAS: Testing requirements via User and System Stories in Agent Systems
2B3-2 | Karthik Abinav Sankararaman |
LEARN: A Mean Field Game Model of Spatial Evolutionary Games
LEARN: Multi-Agent Curricula and Emergent Implicit Signaling
LEARN: Scaling Mean Field Games with Online Mirror Descent
LEARN: Individual-Level Inverse Reinforcement Learning for Mean Field Games
2B4-1 | Samarth Swarup |
COIN, KRRP: Quantitative Group Trust: A Two-Stage Verification Approach
KRRP: Controller Synthesis for Omega-Regular and Steady-State Specifications
KRRP: Multi-Agent Path Finding for Precedence-Constrained Goal Sequences
JAAMAS, COIN, KRRP: Enabling BDI group plans with coordination middleware
2B4-2 | Toshiharu Sugawara |
SIM: Deploying Vaccine Distribution Sites for Improved Accessibility and Equity to Support Pandemic Response
SIM: Using Agent-Based Simulator to Assess Interventions Against COVID-19 in a Small Community Generated from Map Data
SIM: Residual Entropy-based Graph Generative Algorithms
JAAMAS, SIM, EMAS: Automatic calibration framework of agent-based models for dynamic and heterogeneous parameters
2B5-1 | Adrian Haret |
KRRP: Betweenness Centrality in Multi-Agent Path Finding
KRRP: Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss
KRRP: Epistemic Reasoning in Jason
JAAMAS, KRRP, MA&NCGT: Concurrent Negotiations with Global Utility Functions
AAMAS DAY 2, SLOT C, May 12th, 19.00-22.00 NZST (UTC +12) |
2C1-1 | Long Tran-Thanh |
MA&NCGT: Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting
MA&NCGT: On Parameterized Complexity of Binary Networked Public Goods Game
MA&NCGT: The spoofing resistance of frequent call markets
MA&NCGT: The Competition and Inefficiency in Urban Road Last-Mile Delivery
2C1-2 | Pallavi Jain |
SC&CGT: Computation and Bribery of Voting Power in Delegative Simple Games
SC&CGT: Equilibrium Computation For Knockout Tournaments Played By Groups
SC&CGT: How Hard is Bribery in Elections with Randomly Selected Voters
SC&CGT: Optimal Matchings with One-Sided Preferences: Fixed and Cost-Based Quotas
2C1-3 | Paul Harrenstein |
SC&CGT: Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes
SC&CGT: The Price of Majority Support
SC&CGT: A Graph-Based Algorithm for the Automated Justification of Collective Decisions
SC&CGT: Simulating Multiwinner Voting Rules in Judgment Aggregation
2C2-1 | Elizabeth Sklar |
ROBO: Using Deep Learning to Bootstrap Abstractions for Hierarchical Robot Planning
ROBO: Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation
BLUE SKY, ROBO: The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts
ROBO: Standby-Based Deadlock Avoidance Method for Multi-Agent Pickup and Delivery Tasks
2C2-2 | Fernando P. Santos |
LEARN: Best-Response Bayesian Reinforcement Learning with BA-POMDPs for Centaurs
LEARN: Optimizing Multi-Agent Coordination via Hierarchical Graph Probabilistic Recursive Reasoning
JAAMAS, LEARN: Goal-driven Active Reinforcement Learning with Human Teachers
2C2-3 | Chongjie Zhang |
LEARN: How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents
LEARN: Lazy-MDPs: Towards Interpretable RL by Learning When to Act
LEARN: Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning
LEARN: Robust Learning from Observation with Model Misspecification
2C3-1 | Paolo Turrini |
SC&CGT: How Hard is Safe Bribery?
SC&CGT: Multiagent Model-based Credit Assignment for Continuous Control
BLUE SKY, SC&CGT: Augmented Democratic Deliberation: Can Conversational Agents Boost Deliberation in Social Media?
JAAMAS, SC&CGT: Reaching Consensus Under a Deadline
2C3-2 | Jan Maly |
KRRP: Reduction-based Solving of Multi-agent Pathfinding on Large Maps Using Graph Pruning
KRRP: Graphical Representation Enhances Human Compliance with Principles for Graded Argumentation Semantics
KRRP: Online Collective Multiagent Planning by Offline Policy Reuse with Applications to City-Scale Mobility-on-Demand Systems
KRRP: Logical Theories of Collective Attitudes and the Belief Base Perspective
2C3-3 | Alberto Castellini |
KRRP: CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces
KRRP, LEARN: Learning Heuristics for Combinatorial Assignment by Optimally Solving Subproblems
KRRP: BOID*: Autonomous goal deliberation through abduction
KRRP: Multiagent Dynamics of Gradual Argumentation Semantics
2C4-1 | Laurent Perrussel |
KRRP: Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning
KRRP: Socially supervised representation learning: the role of subjectivity in learning efficient representations
KRRP: Negotiated Path Planning for Non-Cooperative Multi-Robot Systems
KRRP: Preference-Based Goal Refinement in BDI Agents
2C4-2 | Ann Nowe |
HUM: Warmth and Competence in Human-Agent Cooperation
APP: Deep Reinforcement Learning for Active Wake Control
LEARN: Spiking Pitch Black: Poisoning an Unknown Environment to Attack Unknown Reinforcement Learners
2C5-1 | Alessandro Ricci |
LEARN: SIDE: State Inference for Partially Observable Cooperative Multi-Agent Reinforcement Learning
LEARN: Budgeted Combinatorial Multi-Armed Bandits
LEARN: Pareto Conditioned Networks
LEARN: MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning
2C5-2 | Neil Yorke-Smith |
SIM: Knowledge Transmission and Improvement Across Generations do not Need Strong Selection
SIM: Using Agent-Based Simulator to Assess Interventions Against COVID-19 in a Small Community Generated from Map Data
SIM: Asynchronous Opinion Dynamics in Social Networks
HUM: Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment
2C5-3 | Emma Norling |
LEARN: Emergent Cooperation from Mutual Acknowledgment Exchange
HUM: COPALZ: A Computational Model of Pathological Appraisal Biases for an Interactive Virtual Alzheimer Patient
SIM: Residual Entropy-based Graph Generative Algorithms
ROBO: Refined Hardness of Distance-Optimal Multi-Agent Path Finding
AAMAS DAY 2, SLOT A, May 13th, 4.00-7.00 NZST (UTC +12) |
2A1-1 | Arpita Biswas |
SC&CGT: Computation and Bribery of Voting Power in Delegative Simple Games
SC&CGT: The Price of Majority Support
JAAMAS, SC&CGT: Reaching Consensus Under a Deadline
SC&CGT: Tracking Truth by Weighting Proxies in Liquid Democracy
2A1-2 | Alan Tsang |
BLUE SKY, SC&CGT: Foundations for the Grassroots Formation of a Democratic Metaverse
SC&CGT: Simulating Multiwinner Voting Rules in Judgment Aggregation
SC&CGT: Welfare vs. Representation in Participatory Budgeting
SC&CGT: Selecting PhD Students and Projects with Limited Funding
2A1-3 | Nisarg Shah |
SC&CGT: Equilibrium Computation For Knockout Tournaments Played By Groups
SC&CGT: Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes
SC&CGT: How Hard is Safe Bribery?
SC&CGT: Computing Nash Equilibria for District-based Nominations
2A2-1 | Sandip Sen |
HUM: Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents
HUM: Multimodal analysis of the predictability of hand-gesture properties
HUM: Factorial Agent Markov Model: Modeling Other Agents’ Behavior in presence of Dynamic Latent Decision Factors
HUM: Towards Pluralistic Value Alignment: Aggregating Value Systems Through Lp-Regression
2A2-2 | Duncan McElfresh |
HUM: Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment
BLUE SKY, ROBO: Robots Teaching Humans: A New Communication Paradigm via Reverse Teleoperation
HUM: Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming
HUM: Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction
2A2-3 | Beatrice Biancardi |
HUM: Coaching agent: making recommendations for behavior change. A case study on improving eating habits
HUM: Warmth and Competence in Human-Agent Cooperation
HUM: COPALZ: A Computational Model of Pathological Appraisal Biases for an Interactive Virtual Alzheimer Patient
HUM: Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning
2A3-1 | Jivko Sinapov |
LEARN: Any-Play: an Intrinsic Augmentation for Zero-Shot Coordination
LEARN: Budgeted Combinatorial Multi-Armed Bandits
LEARN: Evaluating Strategy Exploration in Empirical Game-Theoretic Analysis
LEARN: D3C: Reducing the Price of Anarchy in Multi-Agent Learning
2A3-2 | Martin Hoefer |
MA&NCGT: On Parameterized Complexity of Binary Networked Public Goods Game
MA&NCGT: Balancing Fairness and Efficiency in Traffic Routing via Interpolated Traffic Assignment
MA&NCGT: Corruption in Auctions: Social Welfare Loss in Hybrid Multi-Unit Auctions
BLUE SKY, SC&CGT: Foundations for the Grassroots Formation of a Democratic Metaverse
2A3-3 | Piotr Faliszewski |
MA&NCGT: The Generalized Magician Problem under Unknown Distributions and Related Applications
MA&NCGT: Automated Configuration and Usage of Strategy Portfolios for Bargaining
MA&NCGT: The spoofing resistance of frequent call markets
JAAMAS, MA&NCGT: Combining quantitative and qualitative reasoning in concurrent multi-player games
2A4-1 | Paul Harrenstein |
SC&CGT: Equilibria in Schelling Games: Computational Hardness and Robustness
COIN, KRRP: Quantitative Group Trust: A Two-Stage Verification Approach
HUM: Building contrastive explanations for multi-agent team formation
2A4-2 | Hadi Hosseini |
SC&CGT: Proportional Representation in Matching Markets: Selecting Multiple Matchings under Dichotomous Preferences
MA&NCGT: The Competition and Inefficiency in Urban Road Last-Mile Delivery
SC&CGT: Beyond Cake Cutting: Allocating Homogeneous Divisible Goods
BLUE SKY, SC&CGT: Social Choice Around the Block: On the Computational Social Choice of Blockchain
2A4-3 | Amy Greenwald |
LEARN: Best-Response Bayesian Reinforcement Learning with BA-POMDPs for Centaurs
LEARN: Lazy-MDPs: Towards Interpretable RL by Learning When to Act
LEARN: Centralized Model and Exploration Policy for Multi-Agent RL
2A5-1 | Georgios Anagnostopoulos |
KRRP, LEARN: Learning Heuristics for Combinatorial Assignment by Optimally Solving Subproblems
KRRP: Negotiated Path Planning for Non-Cooperative Multi-Robot Systems
KRRP: A Symbolic Representation for Probabilistic Dynamic Epistemic Logic
KRRP: A Declarative Framework for Maximal k-plex Enumeration Problems
2A5-2 | Alberto Castellini |
LEARN: Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement Learning
LEARN: A Deeper Look at Discounting Mismatch in Actor-Critic Algorithms
LEARN: Optimizing Multi-Agent Coordination via Hierarchical Graph Probabilistic Recursive Reasoning
LEARN: Concave Utility Reinforcement Learning: the Mean-field Game viewpoint
2A5-3 | Maria Gini |
ROBO: Coordinated Multi-Agent Path Finding for Drones and Trucks over Road Networks
ROBO: Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation
ROBO: Intention-Aware Navigation in Crowds with Extended-Space POMDP Planning
2A6-1 | Sven Koenig |
ROBO: Context-Aware Modelling for Multi-Robot Systems Under Uncertainty
ROBO: A Hierarchical Bayesian Process for Inverse RL in Partially-Controlled Environments
BLUE SKY, ROBO: The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts
2A6-2 | Viviana Mascardi |
JAAMAS, SC&CGT: Voting with Random Classifiers in Ensembles (VORACE)
APP: Fully-Autonomous, Vision-based Traffic Signal Control: from Simulation to Reality
COIN: GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement Learning
2A6-3 | Jaime Sichman |
HUM: Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions
BLUE SKY, COIN: Macro Ethics for Governing Equitable Sociotechnical Systems
BLUE SKY, APP: “Go to the Children”: Rethinking to Intelligent Agent Design and Programming in a Developmental Learning Perspective
JAAMAS, HUM: Trust repair in human-agent teams: the effectiveness of explanations and expressing regret
2A7-1 | Joseph Giampapa |
COIN: Ensemble and Incremental Learning for Norm Violation Detection
HUM: CAPS: Comprehensible Abstract Policy Summaries for Explaining Reinforcement Learning Agents
ROBO: Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation
COIN: Learning Efficient Diverse Communication for Cooperative Heterogeneous Teaming
2A7-2 | Nirav Ajmeri |
SIM: Deploying Vaccine Distribution Sites for Improved Accessibility and Equity to Support Pandemic Response
HUM: Explainability in Multi-Agent Path/Motion Planning: User-study-driven taxonomy and requirements
KRRP: Controller Synthesis for Omega-Regular and Steady-State Specifications
COIN: A Distributed Differentially Private Algorithm for Resource Allocation in Unboundedly Large Settings
AAMAS DAY 3, SLOT B, May 13th, 12.00-14.00 NZST (UTC +12) |
3B1-1 | Yasser Mohammad |
LEARN: REMAX: Relational Representation for Multi-Agent Exploration
LEARN: Centralized Model and Exploration Policy for Multi-Agent RL
LEARN: Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement Learning
LEARN: Learning Theory of Mind via Dynamic Traits Attribution
3B1-2 | Guangliang Li |
LEARN: Off-Policy Evolutionary Reinforcement Learning with Maximum Mutations
LEARN: Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning
LEARN: ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments
LEARN: A Deeper Look at Discounting Mismatch in Actor-Critic Algorithms
3B2-1 | Takayuki Ito |
APP, SC&CGT, MA&NCGT: Revenue and User Traffic Maximization in Mobile Short-Video Advertising
MA&NCGT: Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent
KRRP, MA&NCGT: Reasoning about Human-Friendly Strategies in Repeated Keyword Auctions
MA&NCGT: Anti-Malware Sandbox Games
3B2-2 | Paul Scott |
MA&NCGT: Incentives to Invite Others to Form Larger Coalitions
APP, MA&NCGT: Networked Restless Multi-Armed Bandits for Mobile Interventions
ROBO: Standby-Based Deadlock Avoidance Method for Multi-Agent Pickup and Delivery Tasks
MA&NCGT: Robust No-Regret Learning in Min-Max Stackelberg Games
3B3-1 | Minming Li |
HUM: Group fairness in bandit arm selection
SC&CGT: Fair Stable Matching Meets Correlated Preferences
KRRP, SC&CGT: Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems
3B3-2 | Karthik Abinav Sankararaman |
LEARN: Evaluating Strategy Exploration in Empirical Game-Theoretic Analysis
LEARN: Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning
LEARN: Unbiased Asymmetric Reinforcement Learning under Partial Observability
LEARN: BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs
3B4-1 | Guangliang Li |
LEARN: Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments SIM: Cascades and Overexposure in Social Networks: The Budgeted Case BLUE SKY, APP: Agent-Assisted Life-Long Education and Learning |
3B4-2 | Duncan McElfresh |
HUM: Sympathy based Reinforcement Learning agents HUM: Factorial Agent Markov Model: Modeling Other Agents’ Behavior in presence of Dynamic Latent Decision Factors APP: Hierarchical Value Decomposition for Effective On-demand Ride-Pooling |
3B5-1 | Adrian Haret |
SC&CGT: Little House (Seat) on the Prairie: Compactness, Gerrymandering, and Population Distribution SC&CGT: Position-based Matching with Multi-Modal Preferences SC&CGT: Efficient Approximation Algorithms for the Inverse Semivalue Problem |
3B5-2 | Yasser Mohammad |
LEARN: D3C: Reducing the Price of Anarchy in Multi-Agent Learning LEARN: Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning BLUE SKY, ROBO: Robots Teaching Humans: A New Communication Paradigm via Reverse Teleoperation JAAMAS, LEARN: Goal-driven Active Reinforcement Learning with Human Teachers |
3B6-1 | Mohammad Hasan |
COIN, SIM: Hacking the Colony: On the Disruptive Effect of Misleading Pheromone and How to Defend against It HUM: Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning HUM: Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming LEARN: Multi-Objective Reinforcement Learning with Non-Linear Scalarization |
3B6-2 | Mark Reynolds |
LEARN: Characterizing Attacks on Deep Reinforcement Learning LEARN: Lyapunov Exponents for Diversity in Differentiable Games LEARN: Spiking Pitch Black: Poisoning an Unknown Environment to Attack Unknown Reinforcement Learners LEARN: Anomaly Guided Policy Learning from Imperfect Demonstrations |
AAMAS DAY 3, SLOT C, May 13th, 19.00-21.00 NZST (UTC +12) |
3C1-1 | Pieter Libin |
LEARN: Poincaré-Bendixson Limit Sets in Multi-Agent Learning LEARN: The Dynamics of Q-learning in Population Games: a Physics-inspired Continuity Equation Model LEARN: Learning Equilibria in Mean-Field Games: Introducing Mean-Field PSRO LEARN: Concave Utility Reinforcement Learning: the Mean-field Game viewpoint |
3C1-2 | Chongjie Zhang |
BLUE SKY, LEARN: Towards Anomaly Detection in Reinforcement Learning LEARN: Learning to Transfer Role Assignment Across Team Sizes LEARN: Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration LEARN: SIDE: State Inference for Partially Observable Cooperative Multi-Agent Reinforcement Learning |
3C2-1 | Diodato Ferraioli |
SC&CGT: Fair and Truthful Mechanism with Limited Subsidy MA&NCGT: A path-following polynomial equations systems approach for computing Nash equilibria SC&CGT: Pareto optimal and popular house allocation with lower and upper quotas MA&NCGT: Strategy-Proof House Allocation with Existing Tenants over Social Networks |
3C2-2 | Paolo Turrini |
SC&CGT: Multivariate Algorithmics for Eliminating Envy by Donating Goods SC&CGT: Welfare vs. Representation in Participatory Budgeting MA&NCGT: One-Sided Matching Markets with Endowments: Equilibria and Algorithms JAAMAS, MA&NCGT: Designing Efficient and Fair Mechanisms for Multi-Type Resource Allocation |
3C3-1 | Davide Grossi |
SC&CGT: Computing Nash Equilibria for District-based Nominations SC&CGT: Selecting PhD Students and Projects with Limited Funding SC&CGT: Facility Location With Approval Preferences: Strategyproofness and Fairness SC&CGT: Computing Balanced Solutions for Large International Kidney Exchange Schemes |
3C3-2 | Patrick Lederer |
SC&CGT: Coalition Formation Games and Social Ranking Solutions SIM, SC&CGT: Cooperation and learning dynamics under risk diversity and financial incentives JAAMAS, SC&CGT: Towards addressing dynamic multi-agent task allocation in law enforcement JAAMAS, SC&CGT: Voting with Random Classifiers in Ensembles (VORACE) |
3C4-1 | Roel Boumans |
HUM: Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities ROBO: Autonomous Swarm Shepherding Using Curriculum-Based Reinforcement Learning KRRP: A Symbolic Representation for Probabilistic Dynamic Epistemic Logic KRRP: Multi-Agent Path Finding for Precedence-Constrained Goal Sequences |
3C4-2 | Minh Kieu |
SIM: Agent-based modeling and simulation for malware spreading in D2D networks SIM: Segregation in social networks of heterogeneous agents acting under incomplete information JAAMAS, COIN, KRRP: Enabling BDI group plans with coordination middleware JAAMAS, SIM, EMAS: Automatic calibration framework of agent-based models for dynamic and heterogeneous parameters |
3C5-1 | Vincent Corruble |
LEARN: Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning LEARN: Learning Theory of Mind via Dynamic Traits Attribution LEARN: Robust Learning from Observation with Model Misspecification LEARN: Anomaly Guided Policy Learning from Imperfect Demonstrations |
3C5-2 | Badran Raddaoui |
EMAS: Testing requirements via User and System Stories in Agent Systems HUM: Justifying Social-Choice Mechanism Outcome for Improving Participant Satisfaction KRRP: A Declarative Framework for Maximal k-plex Enumeration Problems HUM: Towards Pluralistic Value Alignment: Aggregating Value Systems Through Lp-Regression |
3C6-2 | Pallavi Jain |
LEARN: Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments SC&CGT: Three-Dimensional Popular Matching with Cyclic Preferences SC&CGT: Tracking Truth by Weighting Proxies in Liquid Democracy |
Schedule of the AAMAS posters and demo papers
Authors | Title | Type |
AAMAS DAY 1, SLOT PD1C, May 11th, 22.00-23.00 NZST (UTC +12) |
Christos Verginis, Zhe Xu and Ufuk Topcu | Non-Parametric Neuro-Adaptive Coordination of Multi-Agent Systems | Poster |
Al-Hussein Abutaleb and Bruno Yun | Chameleon – A Framework for Developing Conversational Agents for Medical Training Purposes | Demo |
Arno Hartholt, Ed Fast, Andrew Leeds, Kevin Kim, Andrew Gordon, Kyle McCullough, Volkan Ustun and Sharon Mozgai | Demonstrating the Rapid Integration & Development Environment (RIDE): Embodied Conversational Agent (ECA) and Multiagent Capabilities | Demo |
Bruno Fernandes, André Diogo, Fabio Silva, José Neves and Cesar Analide | KnowLedger – A Multi-Agent System Blockchain for Smart Cities Data | Demo |
Bruno Fernandes, Paulo Novais and Cesar Analide | A Multi-Agent System for Automated Machine Learning | Demo |
Jan Buermann, Dimitar Georgiev, Enrico H. Gerding, Lewis Hill, Obaid Malik, Alexandru Pop, Matthew Pun, Sarvapali D. Ramchurn, Elliot Salisbury and Ivan Stojanovic | An Agent-Based Simulator for Maritime Transport Decarbonisation | Demo |
John Harwell, London Lowmanstone and Maria Gini | SIERRA: A Modular Framework for Research Automation | Demo |
Matheus Aparecido Do Carmo Alves, Amokh Varma, Yehia Elkhatib and Leandro Soriano Marcolino | AdLeap-MAS: An Open-source Multi-Agent Simulator for Ad-hoc Reasoning | Demo |
Yinghui Pan, Junhan Chen, Yifeng Zeng, Zhangrui Yao, Qianwen Li, Biyang Ma, Yi Ji and Zhong Ming | LBfT: Learning Bayesian Network Structures from Text in Autonomous Typhoon Response Systems | Demo |
Abdul Rahman Kreidieh, Yibo Zhao, Samyak Parajuli and Alexandre Bayen | Learning Generalizable Multi-Lane Mixed Autonomy Control Strategies in Single-Lane Settings | Poster |
Angelo Ferrando and Rafael C. Cardoso | Safety Shields, an Automated Failure Handling Mechanism for BDI Agents | Poster |
Benjamin Irwin, Antonio Rago and Francesca Toni | Argumentative Forecasting | Poster |
Conor F Hayes, Diederik M. Roijers, Enda Howley and Patrick Mannion | Decision-Theoretic Planning for the Expected Scalarised Returns | Poster |
David Klaška, Antonin Kucera, Vit Musil and Vojtech Rehak | Minimizing Expected Intrusion Detection Time in Adversarial Patrolling | Poster |
Fredrik Präntare, George Osipov and Leif Eriksson | Concise Representations and Complexity of Combinatorial Assignment Problems | Poster |
Giulio Mazzi, Alberto Castellini and Alessandro Farinelli | Active Generation of Logical Rules for POMCP Shielding | Poster |
Halvard Hummel and Magnus Lie Hetland | Guaranteeing Half-Maximin Shares Under Cardinality Constraints | Poster |
Helen Harman and Elizabeth Sklar | Multi-agent Task Allocation for Fruit Picker Team Formation | Poster |
Ilias Kazantzidis, Timothy Norman, Yali Du and Christopher T. Freeman | How to train your agent: Active Learning from Human Preferences and Justifications in Safety-critical Environments | Poster |
Jieting Luo and Mehdi Dastani | Modeling Affective Reaction in Multi-agent Systems | Poster |
Lukasz Mikulski, Wojtek Jamroga and Damian Kurpiewski | Towards Assume-Guarantee Verification of Strategic Ability | Poster |
Markus Ewert, Stefan Heidekrüger and Martin Bichler | Approaching the Overbidding Puzzle in All-Pay Auctions: Explaining Human Behavior through Bayesian Optimization and Equilibrium Learning | Poster |
Niclas Boehmer, Tomohiro Koana and Rolf Niedermeier | A Refined Complexity Analysis of Fair Districting over Graphs | Poster |
Pierre Cardi, Laurent Gourves and Julien Lesca | On Fair and Efficient Solutions for Budget Apportionment | Poster |
Sanjay Chandlekar, Easwar Subramanian, Sanjay Bhat, Praveen Paruchuri and Sujit Gujar | Multi-unit Double Auctions: Equilibrium Analysis and Bidding Strategy using DDPG in Smart-grids | Poster |
Theophile Cabannes, Mathieu Lauriere, Julien Perolat, Raphael Marinier, Sertan Girgin, Sarah Perrin, Olivier Pietquin, Alexandre Bayen, Eric Goubault and Romuald Elie | Solving N player dynamic routing games with congestion: a mean field approach | Poster |
Yehia Abd Alrahman, Shaun Azzopardi and Nir Piterman | R-CHECK: A Model Checker for Verifying Reconfigurable MAS | Poster |
Yongjie Yang | On the Complexity of Controlling Amendment and Successive Winners | Poster |
Yuanzi Zhu and Carmine Ventre | Irrational behaviour and globalisation | Poster |
AAMAS DAY 2, SLOT PD2B, May 12th, 14.00-15.00 NZST (UTC +12) |
Biyang Ma, Yinghui Pan, Yifeng Zeng and Zhong Ming | Ev-IDID: Enhancing Solutions to Interactive Dynamic Influence Diagrams through Evolutionary Algorithms | Demo |
Hala Khodr, Barbara Bruno, Aditi Kothiyal and Pierre Dillenbourg | Cellulan World: Interactive platform to learn swarm behaviors | Demo |
Naman Shah, Pulkit Verma, Trevor Angle and Siddharth Srivastava | JEDAI: A System for Skill-Aligned Explainable Robot Planning | Demo |
Aldo Iván Ramírez Abarca and Jan Broersen | A Stit Logic of Responsibility | Poster |
Alvaro Gunawan, Ji Ruan and Xiaowei Huang | A Graph Neural Network Reasoner for Game Description Language | Poster |
Anusha Srikanthan and Harish Ravichandar | Resource-Aware Adaptation of Heterogeneous Strategies for Coalition Formation | Poster |
Gaurav Dixit and Kagan Tumer | Behavior Exploration and Team Balancing for Heterogeneous Multiagent Coordination | Poster |
George Li, Arash Haddadan, Ann Li, Madhav Marathe, Aravind Srinivasan, Anil Kumar Vullikanti and Zeyu Zhao | Theoretical Models and Preliminary Results for Contact Tracing and Isolation | Poster |
Isaac Sheidlower, Elaine Short and Allison Moore | Environment Guided Interactive Reinforcement Learning: Learning from Binary Feedback in High-Dimensional Robot Task Environments | Poster |
Ishika Singh, Gargi Singh and Ashutosh Modi | Pre-trained Language Models as Prior Knowledge for Playing Text-based Games | Poster |
Jaleh Zand, Jack Parker-Holder and Stephen Roberts | On-the-fly Strategy Adaptation for ad-hoc Agent Coordination | Poster |
Jinming Ma, Yingfeng Chen, Feng Wu, Xianfeng Ji and Yu Ding | Multimodal Reinforcement Learning with Effective State Representation Learning | Poster |
Kishan Chandan, Jack Albertson and Shiqi Zhang | Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration | Poster |
Palash Dey | Priced Gerrymandering | Poster |
Paul Tylkin, Tsun-Hsuan Wang, Tim Seyde, Kyle Palko, Ross Allen, Alexander Amini and Daniela Rus | Autonomous Flight Arcade Challenge: Single- and Multi-Agent Learning Environments for Aerial Vehicles | Poster |
Sriram Gopalakrishnan and Subbarao Kambhampati | Minimizing Robot Navigation Graph For Position-Based Predictability By Humans | Poster |
Ulrik Brandes, Christian Laußmann and Jörg Rothe | Voting for Centrality | Poster |
Vasilis Livanos, Ruta Mehta and Aniket Murhekar | (Almost) Envy-Free, Proportional and Efficient Allocations of an Indivisible Mixed Manna | Poster |
Vignesh Viswanathan, Megha Bose and Praveen Paruchuri | Moving Target Defense under Uncertainty for Web Applications | Poster |
Wenhan Huang, Kai Li, Kun Shao, Tianze Zhou, Jun Luo, Dongge Wang, Hangyu Mao, Jianye Hao, Jun Wang and Xiaotie Deng | Multiagent Q-learning with Sub-Team Coordination | Poster |
Xiaoyan Zhang, Graham Coates, Sarah Dunn and Jean Hall | An Agent-based Model for Emergency Evacuation from a Multi-floor Building | Poster |
Ziyi Xu, Xue Cheng and Yangbo He | Performance of Deep Reinforcement Learning for High Frequency Market Making on Actual Tick Data | Poster |
AAMAS DAY 2, SLOT PD2C, May 12th, 22.00-23.00 NZST (UTC +12) |
Alison Roberto Panisson, Peter McBurney and Rafael H. Bordini | Towards an Enthymeme-Based Communication Framework | Poster |
Anna Maria Kerkmann and Jörg Rothe | Popularity and Strict Popularity in Altruistic Hedonic Games and Minimum-Based Altruistic Hedonic Games | Poster |
Annemarie Borg and Floris Bex | Contrastive Explanations for Argumentation-Based Conclusions | Poster |
Athina Georgara, Juan Antonio Rodriguez Aguilar, Carles Sierra, Ornella Mich, Raman Kazhamiakin, Alessio Palmero Aprosio and Jean-Christophe Pazzaglia | An anytime heuristic algorithm for allocating many teams to many tasks | Poster |
Aviram Aviv, Yaniv Oshrat, Samuel Assefa, Toby Mustapha, Daniel Borrajo, Manuela Veloso and Sarit Kraus | Advising Agent for Service-Providing Live-Chat Operators | Poster |
Dimitrios Troullinos, Georgios Chalkiadakis, Vasilis Samoladas and Markos Papageorgiou | Max-sum with Quadtrees for Continuous DCOPs with Application to Lane-Free Autonomous Driving | Poster |
Diogo Rato, Marta Couto and Rui Prada | Behavior vs Appearance: what type of adaptations are more socially motivated? | Poster |
Dorothea Baumeister and Tobias Alexander Hogrebe | On the Average-Case Complexity of Predicting Round-Robin Tournaments | Poster |
Felipe Garrido Lucero and Rida Laraki | Stable Matching Games | Poster |
Francis Rhys Ward, Francesca Toni and Francesco Belardinelli | On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios | Poster |
Henri Meess, Jeremias Gerner, Daniel Hein, Stefanie Schmidtner and Gordon Elger | Reinforcement Learning for Traffic Signal Control Optimization: A Concept for Real-World Implementation | Poster |
Jad Bassil, Benoît Piranda, Abdallah Makhoul and Julien Bourgeois | A New Porous Structure for Modular Robots | Poster |
Jennifer She, Jayesh Gupta and Mykel Kochenderfer | Agent-Time Attention for Sparse Rewards Multi-Agent Reinforcement Learning | Poster |
Juncheng Dong, Suya Wu, Mohammadreza Soltani and Vahid Tarokh | Multi-Agent Adversarial Attacks for Multi-Channel Communications | Poster |
Martino Bernasconi, Federico Cacciamani, Simone Fioravanti, Nicola Gatti and Francesco Trovò | The Evolutionary Dynamics of Soft-Max Policy Gradient in Multi-Agent Settings | Poster |
Michał Zawalski, Błażej Osiński, Henryk Michalewski and Piotr Miłoś | Off-Policy Correction For Multi-Agent Reinforcement Learning | Poster |
Miguel Suau, Jinke He, Matthijs Spaan and Frans Oliehoek | Speeding up Deep Reinforcement Learning through Influence-Augmented Local Simulators | Poster |
Önder Gürcan | Proof-of-Work as a Stigmergic Consensus Algorithm | Poster |
Pallavi Bagga, Nicola Paoletti and Kostas Stathis | Deep Learnable Strategy Templates for Multi-Issue Bilateral Negotiation | Poster |
Panagiotis Kanellopoulos, Maria Kyropoulou and Hao Zhou | Forgiving Debt in Financial Network Games | Poster |
Rafid Ameer Mahmud, Fahim Faisal, Saaduddin Mahmud and Md. Mosaddek Khan | A Simulation Based Online Planning Algorithm for Multi-Agent Cooperative Environments | Poster |
Raphaël Avalos, Mathieu Reymond, Ann Nowé and Diederik M. Roijers | Local Advantage Networks for Cooperative Multi-Agent Reinforcement Learning | Poster |
Samhita Kanaparthy, Sankarshan Damle and Sujit Gujar | REFORM: Reputation Based Fair and Temporal Reward Framework for Crowdsourcing | Poster |
Seyed Esmaeili, Sharmila Duppala, Vedant Nanda, Aravind Srinivasan and John Dickerson | Rawlsian Fairness in Online Bipartite Matching: Two-sided, Group, and Individual | Poster |
Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer and Nihar Shah | Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design | Poster |
Yohai Trabelsi, Abhijin Adiga, Sarit Kraus and S.S. Ravi | Maximizing Resource Allocation Likelihood with Minimum Compromise | Poster |
AAMAS DAY 3, SLOT PD3B, May 13th, 14.00-15.00 NZST (UTC +12) |
Arnab Maiti and Palash Dey | Parameterized Algorithms for Kidney Exchange | Poster |
Darshan Chakrabarti, Jie Gao, Aditya Saraf, Grant Schoenebeck and Fang-Yi Yu | Optimal Local Bayesian Differential Privacy over Markov Chains | Poster |
Diyi Hu, Chi Zhang, Viktor Prasanna and Bhaskar Krishnamachari | Intelligent Communication over Realistic Wireless Networks in Multi-Agent Cooperative Games | Poster |
Enwei Guo, Xiumin Wang and Weiwei Wu | Adaptive Aggregation Weight Assignment for Federated Learning: A Deep Reinforcement Learning Approach | Poster |
Erik Wijmans, Irfan Essa and Dhruv Batra | How to Train PointGoal Navigation Agents on a (Sample and Compute) Budget | Poster |
Everardo Gonzalez, Lucie Houel, Radhika Nagpal and Melinda Malley | Influencing Emergent Self-Assembled Structures in Robotic Collectives Through Traffic Control | Poster |
Flavia Barsotti, Rüya Gökhan Koçer and Fernando P. Santos | Can Algorithms be Explained Without Compromising Efficiency? The Benefits of Detection and Imitation in Strategic Classification | Poster |
Guan-Ting Liu, Guan-Yu Lin and Pu-Jen Cheng | Improving Generalization with Cross-State Behavior Matching in Deep Reinforcement Learning | Poster |
Jennifer Leaf and Julie Adams | Measuring Resilience in Collective Robotic Algorithms | Poster |
Jiayu Chen, Jingdi Chen, Tian Lan and Vaneet Aggarwal | Multi-agent Covering Option Discovery through Kronecker Product of Factor Graphs | Poster |
Junsong Gao, Ziyu Chen, Dingding Chen and Wenxin Zhang | Beyond Uninformed Search: Improving Branch-and-bound Based Acceleration Algorithms for Belief Propagation via Heuristic Strategies | Poster |
Justin Payan and Yair Zick | I Will Have Order! Optimizing Orders for Fair Reviewer Assignment | Poster |
Kazi Ashik Islam, Madhav Marathe, Henning Mortveit, Samarth Swarup and Anil Vullikanti | Data-driven Agent-based Models for Optimal Evacuation of Large Metropolitan Areas for Improved Disaster Planning | Poster |
Masanori Hirano, Kiyoshi Izumi and Hiroki Sakaji | Implementation of Actual Data for Artificial Market Simulation | Poster |
Pinkesh Badjatiya, Mausoom Sarkar, Nikaash Puri, Jayakumar Subramanian, Abhishek Sinha, Siddharth Singh and Balaji Krishnamurthy | Status-quo policy gradient in Multi-Agent Reinforcement Learning | Poster |
Ravi Vythilingam, Deborah Richards and Paul Formosa | The Ethical Acceptability of Artificial Social Agents | Poster |
Samuel Arseneault, David Vielfaure and Giovanni Beltrame | RASS: Risk-Aware Swarm Storage | Poster |
Shang Wang, Mathieu Reymond, Athirai Irissappane and Diederik M. Roijers | Near On-Policy Experience Sampling in Multi-Objective Reinforcement Learning | Poster |
Shivika Narang, Arpita Biswas and Y Narahari | On Achieving Leximin Fairness and Stability in Many-to-One Matchings | Poster |
Tesshu Hanaka, Toshiyuki Hirose and Hirotaka Ono | Capacitated Network Design Games on a Generalized Fair Allocation Model | Poster |
Wilkins Leong, Julie Porteous and John Thangarajah | Automated Story Sifting Using Story Arcs | Poster |
Will Ma, Pan Xu and Yifan Xu | Group-level Fairness Maximization in Online Bipartite Matching | Poster |
Yue Jin, Shuangqing Wei, Jian Yuan and Xudong Zhang | Learning to Advise and Learning from Advice in Cooperative Multiagent Reinforcement Learning | Poster |