A professionally curated list of papers, tutorials, books, videos, articles, open-source libraries, and other resources for out-of-distribution (OOD) detection, robustness, and generalization.

continuousml · Last update: Jan 12, 2024

OOD Detection, Robustness, and Generalization

This repo aims to provide the most comprehensive, up-to-date, high-quality resource for OOD detection, robustness, and generalization in Deep Learning. Your one-stop shop for everything OOD is here. If you spot errors or omissions, please open an issue or contact me at [email protected].



Primer: Your Neural Network Doesn't Know What It Doesn't Know


OOD detection is an emerging area of deep learning research that addresses a critical deficiency limiting the deployment of neural networks in real-world scenarios. Despite its tremendous success, deep learning usually rests on an important assumption: the data a model encounters during deployment must be 'similar' to what it was trained on, or, in other words, in-distribution. Regrettably, our world is neither static nor predictable, and neither is the data we feed into our models. A static model that is neither adaptive nor robust to change can quickly become outdated or unreliable.

Equipping a neural network with the ability to say 'no' when faced with unfamiliar input is not merely a convenience; it's an urgent necessity, particularly in safety-critical applications. Understanding and implementing OOD Detection not only strengthens the integrity of a model but also provides a layer of security, ensuring that the vast and unpredictable landscape of real-world data does not become an Achilles' heel for otherwise powerful and sophisticated deep learning systems.
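As a minimal, hedged illustration of what "saying no" can look like in practice, the sketch below implements the maximum-softmax-probability (MSP) baseline of Hendrycks and Gimpel (listed under Papers): the classifier's peak softmax confidence serves as an OOD score, and inputs scoring below a threshold are rejected rather than classified. The `model`, `threshold`, and input tensors are placeholders, not part of any specific library referenced in this list.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability (MSP): higher means 'more in-distribution'."""
    logits = model(x)                                   # (batch, num_classes)
    return F.softmax(logits, dim=-1).max(dim=-1).values

@torch.no_grad()
def is_ood(model: torch.nn.Module, x: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Boolean mask of inputs the model should reject instead of classifying.

    The threshold here is a placeholder; in practice it is calibrated on
    held-out in-distribution data (e.g., chosen so ~95% of ID samples pass).
    """
    return msp_score(model, x) < threshold
```

Stronger scoring functions from the list below, such as the energy score (Liu et al., NeurIPS 2020) or the Mahalanobis distance (Lee et al., NeurIPS 2018), plug into the same interface: only the scoring function changes, while the thresholding and rejection logic stays identical.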


Table of Contents


Researchers

Articles

(2022) Data Distribution Shifts and Monitoring by Chip Huyen

(2020) Out-of-Distribution Detection in Deep Neural Networks by Neeraj Varshney

Talks

(2023) How to detect Out-of-Distribution data in the wild? by Sharon Yixuan Li

(2022) Anomaly detection for OOD and novel category detection by Thomas G. Dietterich

(2022) Reliable Open-World Learning Against Out-of-distribution Data by Sharon Yixuan Li

(2022) Challenges and Opportunities in Out-of-distribution Detection by Sharon Yixuan Li

(2022) Exploring the limits of out-of-distribution detection in vision and biomedical applications by Jie Ren

(2021) Understanding the Failure Modes of Out-of-distribution Generalization by Vaishnavh Nagarajan

(2020) Uncertainty and Out-of-Distribution Robustness in Deep Learning by Balaji Lakshminarayanan, Dustin Tran, and Jasper Snoek

Benchmarks, libraries, datasets, etc.

Benchmarks

OpenOOD: Benchmarking Generalized OOD Detection

Libraries

PyTorch Out-of-Distribution Detection

Datasets

Photorealistic Unreal Graphics (PUG) by Meta AI

"Abstract: Synthetic image datasets offer unmatched advantages for designing and evaluating deep neural networks: they make it possible to (i) render as many data samples as needed, (ii) precisely control each scene and yield granular ground truth labels (and captions), (iii) precisely control distribution shifts between training and testing to isolate variables of interest for sound experimentation. Despite such promise, the use of synthetic image data is still limited -- and often played down -- mainly due to their lack of realism. Most works therefore rely on datasets of real images, which have often been scraped from public images on the internet, and may have issues with regards to privacy, bias, and copyright, while offering little control over how objects precisely appear. In this work, we present a path to democratize the use of photorealistic synthetic data: we develop a new generation of interactive environments for representation learning research, that offer both controllability and realism. We use the Unreal Engine, a powerful game engine well known in the entertainment industry, to produce PUG (Photorealistic Unreal Graphics) environments and datasets for representation learning. In this paper, we demonstrate the potential of PUG to enable more rigorous evaluations of vision models."

Surveys

Generalized Out-of-Distribution Detection: A Survey by Yang et al.

A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges by Salehi et al.

Theses

Robust Out-of-Distribution Detection in Deep Classifiers by Alexander Meinke

Out of Distribution Generalization in Machine Learning by Martin Arjovsky

Papers

"Know thy literature"

OOD Detection

(NeurIPS 2023) Dream the Impossible: Outlier Imagination with Diffusion Models by Du et al.

(ICCV 2023) Nearest Neighbor Guidance for Out-of-Distribution Detection [Code] by Park et al.

(CVPR 2023) Distribution Shift Inversion for Out-of-Distribution Prediction [Code] by Yu et al.

(CVPR 2023) Uncertainty-Aware Optimal Transport for Semantically Coherent Out-of-Distribution Detection [Code] by Lu et al.

(CVPR 2023) GEN: Pushing the Limits of Softmax-Based Out-of-Distribution Detection [Video] [Code] by Liu et al.

(CVPR 2023) (NAP) Detection of Out-of-Distribution Samples Using Binary Neuron Activation Patterns [Code] by Olber et al.

(CVPR 2023) Decoupling MaxLogit for Out-of-Distribution Detection by Zhang and Xiang

(CVPR 2023) Balanced Energy Regularization Loss for Out-of-Distribution Detection [Code] by Choi et al.

(CVPR 2023) Rethinking Out-of-Distribution (OOD) Detection: Masked Image Modeling Is All You Need [Code] by Li et al.

(CVPR 2023) LINe: Out-of-Distribution Detection by Leveraging Important Neurons [Code] by Ahn et al.

(ICLR 2023) ⭐⭐⭐⭐⭐ A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet [Code] by Galil et al.

(ICLR 2023) Energy-based Out-of-Distribution Detection for Graph Neural Networks [Code] by Wu et al.

(ICLR 2023) The Tilted Variational Autoencoder: Improving Out-of-Distribution Detection [Code] by Floto et al.

(ICLR 2023) Out-of-Distribution Detection based on In-Distribution Data Patterns Memorization with Modern Hopfield Energy by Zhang et al.

(ICLR 2023) Out-of-Distribution Detection and Selective Generation for Conditional Language Models by Ren et al.

(ICLR 2023) Turning the Curse of Heterogeneity in Federated Learning into a Blessing for Out-of-Distribution Detection by Yu et al.

(ICLR 2023) Non-Parametric Outlier Synthesis [Code] by Tao et al.

(ICLR 2023) Out-of-distribution Detection with Implicit Outlier Transformation by Wang et al.

(ICML 2023) Unsupervised Out-of-Distribution Detection with Diffusion Inpainting by Liu et al.

(ICML 2023) Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting by Bagi et al.

(ICML 2023) Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization [Code] by Ramé et al.

(ICML 2023) Out-of-Distribution Generalization of Federated Learning via Implicit Invariant Relationships by Guo et al.

(ICML 2023) Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection [Video] by Bai et al.

(ICML 2023) Concept-based Explanations for Out-of-Distribution Detectors by Choi et al.

(ICML 2023) Hybrid Energy Based Model in the Feature Space for Out-of-Distribution Detection by Lafon et al.

(ICML 2023) Detecting Out-of-distribution Data through In-distribution Class Prior by Jiang et al.

(ICML 2023) Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [Code] by Zhu et al.

(ICML 2023) In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation [Code] by Bitterwolf et al.

(AAAI 2023) READ: Aggregating Reconstruction Error into Out-of-Distribution Detection by Jiang et al.

(AAAI 2023) Towards In-Distribution Compatible Out-of-Distribution Detection by Wu et al.

(AAAI 2023) Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection by Zhang and Ranganath

(MIDL 2023) Know Your Space: Inlier and Outlier Construction for Calibrating Medical OOD Detectors [Project Page] by Narayanaswamy, Mubarka et al.

(TMLR 2022) Linking Neural Collapse and L2 Normalization with Improved Out-of-Distribution Detection in Deep Neural Networks by Haas et al.

(CVPR 2022) ViM: Out-Of-Distribution with Virtual-logit Matching [Project Page] by Wang et al.

(CVPR 2022) Neural Mean Discrepancy for Efficient Out-of-Distribution Detection by Dong et al.

(CVPR 2022) Deep Hybrid Models for Out-of-Distribution Detection by Cao and Zhang

(CVPR 2022) Rethinking Reconstruction Autoencoder-Based Out-of-Distribution Detection by Yibo Zhou

(CVPR 2022) Unknown-Aware Object Detection: Learning What You Don't Know from Videos in the Wild [Code] by Du et al.

(NeurIPS 2022) ⭐⭐⭐⭐⭐ OpenOOD: Benchmarking Generalized Out-of-Distribution Detection [Code] by Yang et al.

(NeurIPS 2022) Boosting Out-of-distribution Detection with Typical Features by Zhu et al.

(NeurIPS 2022) GraphDE: A Generative Framework for Debiased Learning and Out-of-Distribution Detection on Graphs [Code] by Li et al.

(NeurIPS 2022) Out-of-Distribution Detection via Conditional Kernel Independence Model by Wang et al.

(NeurIPS 2022) Your Out-of-Distribution Detection Method is Not Robust! [Code] by Azizmalayeri et al.

(NeurIPS 2022) Out-of-Distribution Detection with An Adaptive Likelihood Ratio on Informative Hierarchical VAE by Li et al.

(NeurIPS 2022) GOOD: A Graph Out-of-Distribution Benchmark [Code] by Gui et al.

(NeurIPS 2022) ⭐⭐⭐⭐⭐ Is Out-of-Distribution Detection Learnable? by Fang et al.

(NeurIPS 2022) Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment by Yang et al.

(NeurIPS 2022) Delving into Out-of-Distribution Detection with Vision-Language Representations [Video] [Code] by Ming et al.

(NeurIPS 2022) Beyond Mahalanobis Distance for Textual OOD Detection by Colombo et al.

(NeurIPS 2022) Density-driven Regularization for Out-of-distribution Detection by Huang et al.

(NeurIPS 2022) SIREN: Shaping Representations for Detecting Out-of-Distribution Objects [Code] by Du et al.

(ICML 2022) Mitigating Neural Network Overconfidence with Logit Normalization [Code] by Wei et al.

(ICML 2022) Scaling Out-of-Distribution Detection for Real-World Settings [Code] by Hendrycks et al.

(ICML 2022) Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets by Wei et al.

(ICML 2022) Model Agnostic Sample Reweighting for Out-of-Distribution Learning [Code] by Zhou et al.

(ICML 2022) Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition [Code] by Wang et al.

(ICML 2022) Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [Code] by Bitterwolf et al.

(ICML 2022) Predicting Out-of-Distribution Error with the Projection Norm [Code] by Yu et al.

(ICML 2022) POEM: Out-of-Distribution Detection with Posterior Sampling [Code] by Ming et al.

(ICML 2022) (kNN) Out-of-Distribution Detection with Deep Nearest Neighbors [Code] by Sun et al.

(ICML 2022) Training OOD Detectors in their Natural Habitats by Katz-Samuels et al.

(ICLR 2023) Extremely Simple Activation Shaping for Out-of-Distribution Detection [Code] by Djurisic et al.

(ICLR 2022) Revisiting flow generative models for Out-of-distribution detection by Jiang et al.

(ICLR 2022) PI3NN: Out-of-distribution-aware Prediction Intervals from Three Neural Networks [Code] by Liu et al.

(ICLR 2022) (ATC) Leveraging unlabeled data to predict out-of-distribution performance by Garg et al.

(ICLR 2022) Igeood: An Information Geometry Approach to Out-of-Distribution Detection [Code] by Gomes et al.

(ICLR 2023) How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? [Code] by Ming et al.

(ICLR 2022) VOS: Learning What You Don't Know by Virtual Outlier Synthesis [Code] by Du et al.

(AAAI 2022) On the Impact of Spurious Correlation for Out-of-distribution Detection [Code] by Ming et al.

(AAAI 2022) iDECODe: In-Distribution Equivariance for Conformal Out-of-Distribution Detection by Kaur et al.

(AAAI 2022) Provable Guarantees for Understanding Out-of-distribution Detection [Code] by Morteza and Li

(AAAI 2022) Learning Modular Structures That Generalize Out-of-Distribution (Student Abstract) by Ashok et al.

(AAAI 2022) Exploiting Mixed Unlabeled Data for Detecting Samples of Seen and Unseen Out-of-Distribution Classes by Sun and Wang

(CVPR 2021) Out-of-Distribution Detection Using Union of 1-Dimensional Subspaces [Code] by Zaeemzadeh et al.

(CVPR 2021) MOOD: Multi-level Out-of-distribution Detection [Code] by Lin et al.

(CVPR 2021) MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space [Code] by Huang and Li

(NeurIPS 2021) Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection [Code] by Bibas et al.

(NeurIPS 2021) STEP: Out-of-Distribution Detection in the Presence of Limited In-Distribution Labeled Data by Zhou et al.

(NeurIPS 2021) Exploring the Limits of Out-of-Distribution Detection [Code] by Fort et al.

(NeurIPS 2021) Learning Causal Semantic Representation for Out-of-Distribution Prediction [Code] by Liu et al.

(NeurIPS 2021) Towards optimally abstaining from prediction with OOD test examples by Kalai and Kanade

(NeurIPS 2021) Locally Most Powerful Bayesian Test for Out-of-Distribution Detection using Deep Generative Models [Code] by Kim et al.

(NeurIPS 2022) RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection [Code] by Song et al.

(NeurIPS 2021) ⭐⭐⭐⭐⭐ ReAct: Out-of-distribution Detection With Rectified Activations [Code] by Sun et al.

(NeurIPS 2021) ⭐⭐⭐⭐⭐ (GradNorm) On the Importance of Gradients for Detecting Distributional Shifts in the Wild [Code] by Huang et al.

(NeurIPS 2022) Watermarking for Out-of-distribution Detection by Wang et al.

(NeurIPS 2021) Can multi-label classification networks know what they don't know? [Code] by Wang et al.

(ICLR 2021) SSD: A Unified Framework for Self-Supervised Outlier Detection [Code] by Sehwag et al.

(ICLR 2021) Multiscale Score Matching for Out-of-Distribution Detection [Code] by Mahmood et al.

(ICML 2021) Understanding Failures in Out-of-Distribution Detection with Deep Generative Models by Zhang et al.

(ICCV 2021) Semantically Coherent Out-of-Distribution Detection [Project Page] [Code] by Yang et al.

(ICCV 2021) CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue by Tang et al.

(ECCV 2022) DICE: Leveraging Sparsification for Out-of-Distribution Detection [Code] by Sun and Li

(CVPR 2020) Deep Residual Flow for Out of Distribution Detection [Code] by Zisselman and Tamar

(CVPR 2020) Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data [Code] by Hsu et al.

(NeurIPS 2020) CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [Code] by Tack et al.

(NeurIPS 2020) ⭐⭐⭐⭐⭐ Energy-based Out-of-distribution Detection [Code] by Liu et al.

(NeurIPS 2020) OOD-MAML: Meta-Learning for Few-Shot Out-of-Distribution Detection and Classification [Video] by Jeong and Kim

(NeurIPS 2020) Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples [Code] by Nandy et al.

(NeurIPS 2020) Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder [Code] by Xiao et al.

(NeurIPS 2020) ⭐⭐⭐⭐⭐ Why Normalizing Flows Fail to Detect Out-of-Distribution Data [Code] by Kirichenko et al.

(ICLR 2020) Towards Neural Networks That Provably Know When They Don't Know [Code] by Meinke et al.

(ICML 2020) Detecting Out-of-Distribution Examples with Gram Matrices [Code] by Sastry and Oore

(CVPR 2019) Out-Of-Distribution Detection for Generalized Zero-Shot Action Recognition [Code] by Mandal et al.

(NeurIPS 2019) Likelihood Ratios for Out-of-Distribution Detection [Video] by Ren et al.

(ICCV 2019) Unsupervised Out-of-Distribution Detection by Maximum Classifier Discrepancy [Code] by Yu and Aizawa

(NeurIPS 2018) ⭐⭐⭐⭐⭐ (Mahalanobis) A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks [Code] by Lee et al.

(NeurIPS 2018) Out-of-Distribution Detection using Multiple Semantic Label Representations by Shalev et al.

(CVPR 2019) Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem [Code] by Hein et al.

(ICLR 2019) Do Deep Generative Models Know What They Don't Know? [Slides] by Nalisnick et al.

(ICLR 2019) ⭐⭐⭐⭐⭐ (OE) Deep Anomaly Detection with Outlier Exposure [Code] by Hendrycks et al.

(ICLR 2018) ⭐⭐⭐⭐⭐ (ODIN) Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks [Code] by Liang et al.

(ICLR 2018) Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples [Code] by Lee et al.

(ECCV 2018) Out-of-Distribution Detection Using an Ensemble of Self-Supervised Leave-out Classifiers [Code] by Vyas et al.

(ArXiv 2018) Learning Confidence for Out-of-Distribution Detection in Neural Networks [Code] by DeVries and Taylor

(ICLR 2017) ⭐⭐⭐⭐⭐ A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks [Code] by Hendrycks and Gimpel

OOD Robustness

(ICLR 2023) Diversify and Disambiguate: Out-of-Distribution Robustness via Disagreement by Lee et al.

(ICML 2023) Learning Unforeseen Robustness from Out-of-distribution Data Using Equivariant Domain Translator by Zhu et al.

(ICML 2023) Out-of-Domain Robustness via Targeted Augmentations [Code] by Gao et al.

(TMLR 2022) The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning by Andreassen et al.

(NeurIPS 2022) Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness by Pinto et al.

(NeurIPS 2022) Provably Adversarially Robust Detection of Out-of-Distribution Data (Almost) for Free [Code] by Meinke et al.

(ICML 2022) Improving Out-of-Distribution Robustness via Selective Augmentation [Video] [Code] by Yao et al.

(NeurIPS 2021) A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness by Diffenderfer et al.

(ICLR 2021) In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness [Code] by Xie et al.

(NeurIPS 2020) Certifiably Adversarially Robust Detection of Out-of-Distribution Data [Code] by Bitterwolf et al.

OOD Generalization

(ICLR 2023) Improving Out-of-distribution Generalization with Indirection Representations by Pham et al.

(ICLR 2023) Topology-aware Robust Optimization for Out-of-Distribution Generalization [Code] by Qiao and Peng

(ICLR 2023) ⭐⭐⭐⭐⭐ Modeling the Data-Generating Process is Necessary for Out-of-Distribution Generalization by Kaur et al.

(ICML 2023) Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection [Video] by Bai et al.

(AAAI 2023) On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization by Xin et al.

(AAAI 2023) Certifiable Out-of-Distribution Generalization by Ye et al.

(AAAI 2023) Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization by Zhu et al.

(AAAI 2023) Out-of-Distribution Generalization by Neural-Symbolic Joint Training by Liu et al.

(CVPR 2022) Out-of-Distribution Generalization With Causal Invariant Transformations by Wang et al.

(CVPR 2022) OoD-Bench: Quantifying and Understanding Two Dimensions of Out-of-Distribution Generalization [Video] [Code] by Ye et al.

(NeurIPS 2022) Learning Invariant Graph Representations for Out-of-Distribution Generalization by Li et al.

(NeurIPS 2022) Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors by Wang et al.

(NeurIPS 2022) Functional Indirection Neural Estimator for Better Out-of-distribution Generalization by Pham et al.

(NeurIPS 2022) Multi-Instance Causal Representation Learning for Instance Label Prediction and Out-of-Distribution Generalization [Code] by Zhang et al.

(NeurIPS 2022) Assaying Out-Of-Distribution Generalization in Transfer Learning [Code] by Wenzel et al.

(NeurIPS 2022) Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs [Code] by Chen et al.

(NeurIPS 2022) Diverse Weight Averaging for Out-of-Distribution Generalization [Code] by Ramé et al.

(NeurIPS 2022) ZooD: Exploiting Model Zoo for Out-of-Distribution Generalization by Dong et al.

(ICML 2022) Certifying Out-of-Domain Generalization for Blackbox Functions [Code] by Weber et al.

(NeurIPS 2022) LOG: Active Model Adaptation for Label-Efficient OOD Generalization by Shao et al.

(ICML 2022) Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization [Code] by Ramé et al.

(ICLR 2022) Out-of-distribution Generalization in the Presence of Nuisance-Induced Spurious Correlations [Code] by Puli et al.

(ICLR 2022) Uncertainty Modeling for Out-of-Distribution Generalization [Code] by Li et al.

(ICLR 2022) Invariant Causal Representation Learning for Out-of-Distribution Generalization by Lu et al.

(AAAI 2022) VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization by Chen et al.

(CVPR 2021) Deep Stable Learning for Out-of-Distribution Generalization by Zhang et al.

(NeurIPS 2021) Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization [Video] by Ahuja et al.

(NeurIPS 2021) On the Out-of-distribution Generalization of Probabilistic Image Modelling by Zhang et al.

(NeurIPS 2021) On Calibration and Out-of-Domain Generalization [Video] by Wald et al.

(NeurIPS 2021) Towards a Theoretical Framework of Out-of-Distribution Generalization [Slides] by Ye et al.

(NeurIPS 2021) Out-of-Distribution Generalization in Kernel Regression by Canatar et al.

(NeurIPS 2021) Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning [Code] by Milbich et al.

(ICLR 2021) Understanding the failure modes of out-of-distribution generalization [Video] by Nagarajan et al.

(ICML 2021) Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [Code] by Miller et al.

(ICML 2021) Out-of-Distribution Generalization via Risk Extrapolation (REx) by Krueger et al.

(ICML 2021) Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization? [Slides] by Zhang et al.

(ICML 2021) Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization [Code] by Baranwal et al.

OOD Everything else

(ICLR 2023) Harnessing Out-Of-Distribution Examples via Augmenting Content and Style by Huang et al.

(ICLR 2023) Pareto Invariant Risk Minimization: Towards Mitigating the Optimization Dilemma in Out-of-Distribution Generalization [Code] by Chen et al.

(ICLR 2023) On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning by Bai et al.

(ICLR 2023) Out-of-distribution Representation Learning for Time Series Classification by Lu et al.

(ICML 2023) Exploring Chemical Space with Score-based Out-of-distribution Generation [Code] by Lee et al.

(ICML 2023) The Value of Out-of-Distribution Data by Silva et al.

(ICML 2023) CLIPood: Generalizing CLIP to Out-of-Distributions by Shu et al.

(ICRA 2023) Unsupervised Road Anomaly Detection with Language Anchors by Tian et al.

(ArXiv 2023) Characterizing Out-of-Distribution Error via Optimal Transport by Lu et al.

(NeurIPS 2022) GenerSpeech: Towards Style Transfer for Generalizable Out-Of-Domain Text-to-Speech [Code] by Huang et al.

(NeurIPS 2022) Learning Substructure Invariance for Out-of-Distribution Molecular Representations [Code] by Yang et al.

(NeurIPS 2022) Evaluating Out-of-Distribution Performance on Document Image Classifiers by Larson et al.

(NeurIPS 2022) OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs by Zhou et al.

(ICLR 2022) Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution by Kumar et al.

(ICML 2022) Improved StyleGAN-v2 based Inversion for Out-of-Distribution Images by Subramanyam et al.

(NeurIPS 2021) The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations [Slides] by Hase et al.

(NeurIPS 2021) POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples [Code] by Le et al.

(NeurIPS 2021) Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data [Code] by Park et al.

(ICLR 2021) Removing Undesirable Feature Contributions Using Out-of-Distribution Data by Lee et al.

Subscribe to our newsletter