Becoming Certain About Uncertainty (BCAU)
The value of models that output some notion of uncertainty is often taken as self-evident, and little attention is given to what kind of uncertainty is being estimated or what its quantification is worth. In this workshop we invite submissions that demonstrate a downstream application for the uncertainty estimate a model or algorithm outputs, as well as works showing that quantifying certain kinds of uncertainty does not help with a task often cited as its motivation. We will also consider papers that propose a taxonomy of forms of uncertainty, or novel evaluation metrics, as long as they are grounded in a downstream application. We hope to stimulate a discussion around which notions of uncertainty exist, what their uses are, and how they relate to and integrate with one another.
Important Dates
| Event | Date |
| --- | --- |
| Submission Deadline | September 22nd, 2022, 07:00 AM UTC |
| Workshop Accept / Reject Notification | October 20th, 2022, 07:00 AM UTC |
| Workshop | December 2nd or 3rd, 2022 |
Call for Papers
We invite high-quality extended abstract submissions on different kinds of uncertainty quantification and their downstream applications. Topics of interest include (non-exhaustive list):
- Downstream applications of uncertainty quantification in machine learning
- Notions of uncertainty and their taxonomy
- Novel or underappreciated notions of uncertainty (e.g. computational uncertainty)
- Evaluation metrics for uncertainty quantification
- Failure cases illustrating where uncertainty quantification is essential
Submissions
Accepted submissions will be presented during joint poster sessions and will be made publicly available as non-archival reports, allowing future submissions to archival conferences or journals.
Submissions must be anonymous, in NeurIPS format and not longer than 4 pages excluding references, acknowledgements, and supplementary material. Long appendices are permitted but strongly discouraged, and reviewers are not required to read them. The review process is double-blind.
We also welcome submissions of recently published work that falls squarely within the scope of the workshop (with proper formatting). We encourage the authors of such submissions to focus on accessibility to the wider NeurIPS community when distilling their work into an extended abstract.
Authors may be asked to review other workshop submissions.
Schedule
Coming Soon!
Invited Speakers

Tamara Broderick (confirmed) is an Associate Professor in the Department of Electrical Engineering and Computer Science at MIT. Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning. She is interested in understanding how we can reliably quantify uncertainty and robustness in modern, complex data analysis procedures. To that end, she is particularly focused on Bayesian inference and graphical models, with an emphasis on scalable, nonparametric, and unsupervised learning. Her awards include selection to the COPSS Leadership Academy (2021), an Early Career Grant (ECG) from the Office of Naval Research (2020), an NSF CAREER Award (2018), a Sloan Research Fellowship (2018), an Army Research Office Young Investigator Program (YIP) award (2017), Google Faculty Research Awards, an Amazon Research Award, and the ISBA Lifetime Members Junior Researcher Award, among others.

Philipp Hennig (confirmed) holds the Chair for the Methods of Machine Learning at the University of Tübingen, and is an adjunct senior research scientist at the Max Planck Institute for Intelligent Systems. He studied Physics in Heidelberg, Germany, and at Imperial College London before moving to the University of Cambridge, UK, where he attained a PhD in the group of Sir David JC MacKay with research on machine learning. Since then, he has been interested in connections between computation and inference. With international collaborators, he helped establish the field of probabilistic numerics. His research has been supported, among others, by the Emmy Noether Programme of the German Research Foundation (DFG), an independent Research Group of the Max Planck Society, and a Starting Grant of the European Commission.

Balaji Lakshminarayanan (confirmed) is a Staff Research Scientist (Tech Lead, Manager) at Google Brain in Mountain View (USA), where he leads a team of research scientists and engineers. Prior to that, he was a Staff Research Scientist at DeepMind. Balaji's research interests are in scalable, probabilistic machine learning. His PhD thesis was focused on exploring (and exploiting) connections between neat mathematical ideas in (non-parametric) Bayesian land and computationally efficient tricks in decision tree land, to get the best of both worlds. More recently, he has focused on probabilistic deep learning, including but not limited to (out-of-distribution) robustness, deep generative models, normalizing flows and variational autoencoders, as well as applying probabilistic deep learning ideas in healthcare and Google products.

Yingzhen Li (confirmed) is a lecturer at the Department of Computing at Imperial College London. Before that she spent 2.5 years as a senior researcher at Microsoft Research Cambridge. Yingzhen is interested in building reliable machine learning systems which can generalise to unseen environments. She approaches this goal using probabilistic modelling and representation learning. Her research topics include (deep) probabilistic graphical model design, fast and accurate (Bayesian) inference / computation techniques, uncertainty quantification for computation and downstream tasks, as well as robust and adaptive machine learning systems.

Andrew Gordon Wilson (confirmed) is an Assistant Professor at the Courant Institute of Mathematical Sciences and the Center for Data Science at New York University. His research focuses on developing flexible, interpretable, and scalable machine learning models, often involving deep learning, Gaussian processes, and kernel learning. He cares about developing practically impactful methods, while at the same time understanding why the methods work, and the foundations for building models that learn and generalize. Andrew is particularly excited about loss surfaces, generalization, probabilistic generative models, physics-inspired methods, and Bayesian methods in deep learning. His work has been applied to time series, vision, NLP, spatial statistics, public policy, medicine, and physics.
Organizers

- Columbia University
- Carnegie Mellon University
- Caltech
- Columbia University
- New York University
- Columbia University
- Autodesk AI Lab
- University of Tübingen
- Columbia University
News
May 27, 2022