Learning to Bid with AuctionGym
Best Paper
Olivier Jeunen, Sean Murphy and Ben Allison
Online advertising opportunities are sold through auctions, billions of times every day across the web.
Advertisers who participate in those auctions need to decide on a bidding strategy: how much they are willing to bid for a given impression opportunity.
Deciding on such a strategy is not a straightforward task, because of the interactive and reactive nature of the repeated auction mechanism.
Indeed, an advertiser does not observe counterfactual outcomes of bid amounts that were not submitted, and successful advertisers will adapt their own strategies based on bids placed by competitors.
These characteristics complicate effective learning and evaluation of bidding strategies based on logged data alone.
The interactive and reactive nature of the bidding problem lends itself to a bandit or reinforcement learning formulation, where a bidding strategy can be optimised to maximise cumulative rewards.
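To make the bandit view concrete, here is a minimal sketch (not code from the paper) of the interaction loop, assuming a second-price payment rule and collapsing all competitors into a single highest competing bid; the names and distributions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def surplus(bid, value, competing_bid):
    """Reward in a second-price auction: pay the runner-up price on a win.

    Losing rounds yield zero reward and reveal nothing about which other
    bid amounts would have won (the counterfactual problem).
    """
    return value - competing_bid if bid > competing_bid else 0.0

total = 0.0
for _ in range(10_000):
    value = rng.uniform(0.0, 1.0)          # context: estimated impression value
    bid = value                            # action: here, a truthful strategy
    competing_bid = rng.uniform(0.0, 1.0)  # unobserved competitor behaviour
    total += surplus(bid, value, competing_bid)
print(f"Cumulative reward: {total:.2f}")
```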
Several design choices then need to be made regarding parameterisation, model-based or model-free approaches, and the formulation of the objective function.
This work provides a unified framework for such "learning to bid" methods, showing how many existing approaches fall under the value-based paradigm.
We then introduce novel policy-based and doubly robust formulations of the bidding problem.
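For intuition, the "doubly robust" idea can be illustrated with the standard doubly robust off-policy estimator from the bandit literature; the paper's bidding-specific formulation may differ. With contexts $x_i$, logged bids $a_i$, observed rewards $r_i$, logging policy $\pi_0$, and a learned reward model $\hat{r}$:

$$
\hat{V}_{\mathrm{DR}}(\pi) = \frac{1}{N}\sum_{i=1}^{N}\left[\mathbb{E}_{a \sim \pi(\cdot\mid x_i)}\big[\hat{r}(x_i, a)\big] + \frac{\pi(a_i \mid x_i)}{\pi_0(a_i \mid x_i)}\big(r_i - \hat{r}(x_i, a_i)\big)\right]
$$

The estimator remains consistent if either the reward model $\hat{r}$ or the logged propensities $\pi_0$ are accurate, which is where the name comes from.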
To allow for reliable and reproducible offline validation of such methods without relying on sensitive proprietary data, we introduce AuctionGym: a simulation environment that enables the use of bandit learning for bidding strategies in online advertising auctions.
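As an illustration of what such a simulator enables (not AuctionGym's actual API), the sketch below trains a policy-based bidder with a REINFORCE-style update in the same toy second-price environment; the Gaussian bid-shading parameterisation is an assumption made for this example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Policy-based bidder: bid = shade * value, with shade ~ N(mu, sigma^2).
mu, sigma, lr = 0.5, 0.1, 0.001

for _ in range(20_000):
    value = rng.uniform(0.0, 1.0)          # estimated impression value
    shade = rng.normal(mu, sigma)          # sample an action from the policy
    bid = max(shade, 0.0) * value
    competing_bid = rng.uniform(0.0, 1.0)  # highest competing bid
    # Second-price surplus on a win, zero otherwise.
    reward = value - competing_bid if bid > competing_bid else 0.0
    # REINFORCE (no baseline, so high variance): move mu towards
    # actions that earned high reward, via the score function.
    mu += lr * reward * (shade - mu) / sigma**2

print(f"Learned shading factor: {mu:.3f}")
```

In this toy setup, truthful bidding (a shading factor of 1.0) is optimal under the second-price rule, so the learned parameter should drift towards it.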
We present results from a suite of experiments under varying environmental conditions, unveiling insights that can guide practitioners who need to decide on a model class.
Empirical observations highlight the effectiveness of our newly proposed methods.
AuctionGym is released under an open-source license, and we expect the research community to benefit from this tool.