Artificial intelligence (AI) is woven into everyday life. Still, traditional AI ignores an entire class of important phenomena in which relationships are adversarial. In contrast, adversarial AI focuses on the clash between a system designer and a malicious adversary, one who aims to harm the designer and the users who consume the designer's services. Damage from adversarial tampering can be minor: simple cyber-attacks that prevent Google Home from issuing morning wake-up alarms or notifying us of important meetings, or the poisoning of our navigation apps with false traffic jams. Or it can be catastrophic: imagine a recommender system directing masses of users to fake news articles shortly before an election day, or a terrorist organization exploiting knowledge of an observed airport patrolling strategy. Unfortunately, these are no mere movie plot devices; such scenarios, ranging from inconvenience to disaster, motivate research on adversarial AI. These are some of the scenarios that my research aims to prevent.
I study incentives in crowdsourcing, considering the general question of how to design payments and bonuses, and how to split up tasks, so as to elicit the most effort. I also consider fundamental problems in the economics of decision-making on networks, particularly coordination and cooperation problems. One such problem is team formation: designing incentives for players to report their preferences over their prospective teammates honestly, while at the same time achieving higher-level social goals such as maximizing overall welfare and fairness.